{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Fast Multiplication and the Discrete Fourier Transform\n",
"\n",
"In this handout we will introduce the discrete Fourier Transform and use it to develop a multiplication algorithm that is much faster than the standard \"schoolbook\" algorithm.\n",
"\n",
"Recall the standard algorithm uses a _quadratic_ number of word operations (in the length of the numbers being multiplied). In this handout we will develop an algorithm that uses a _quasilinear_ number of word operations.\n",
"\n",
"A function $f$ is said to be _quasilinear_ when $f(n)=O(n\\log(n)^k)$ for some positive $k$. Sometimes this is written as $f(n)=\\tilde{O}(n)$ (\"soft O\" notation) and $f$ is said to be \"softly linear\"."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Improving Multiplication: Karatsuba's Algorithm\n",
"\n",
"We will now cover the first algorithm discovered that asymptotically beats the quadratic running time of the standard \"schoolbook\" method.\n",
"\n",
"This method was discovered by [Anatoly Karatsuba](https://en.wikipedia.org/wiki/Anatoly_Karatsuba) as a student in 1960 and presented at Moscow State University in a seminar run by [Andrey Kolmogorov](https://en.wikipedia.org/wiki/Andrey_Kolmogorov). Kolmogorov credited Karatsuba and published a description of the algorithm in the _Proceedings of the USSR Academy of Sciences_ in 1962.\n",
"\n",
"The basic idea is simple: To find an alternative way of representing the multiplication of two polynomials $A$ and $B$ that uses fewer coefficient multiplications than the schoolbook method.\n",
"\n",
"Note that multiplying two polynomials of degree $n-1$ with the schoolbook method uses exactly $n^2$ coefficient multiplications. For example, if $n=2$, $A=a_1x+a_0$, and $B=b_1x+b_0$, then\n",
"\n",
"\\begin{align*}\n",
"AB = a_1b_1x^2 + (a_1b_0+a_0b_1)x + a_0b_0 .\n",
"\\end{align*}\n",
"\n",
"The schoolbook method computes this by evaluating each of the coefficient products which appear in this expression, i.e., $a_1b_1$, $a_1b_0$, $a_0b_1$, and $a_0b_0$."
]
},
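{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this count concrete, here is a short Python sketch (the function name `schoolbook` is our own) that multiplies coefficient lists and tallies the coefficient multiplications performed:\n",
"\n",
"```python\n",
"# Schoolbook polynomial multiplication on coefficient lists [a0, a1, ...],\n",
"# counting the number of coefficient multiplications performed.\n",
"def schoolbook(A, B):\n",
"    C = [0] * (len(A) + len(B) - 1)\n",
"    count = 0\n",
"    for i, a in enumerate(A):\n",
"        for j, b in enumerate(B):\n",
"            C[i + j] += a * b\n",
"            count += 1\n",
"    return C, count\n",
"\n",
"# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2 using 2^2 = 4 multiplications:\n",
"print(schoolbook([1, 2], [3, 4]))  # ([3, 10, 8], 4)\n",
"```"
]
},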
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### First Attempt: Decompose the product\n",
"\n",
"Suppose $A$ and $B$ have even degree $n=2m$. Then we can write them as\n",
"\n",
"\\begin{align*}\n",
"A &= A_1 x^{m} + A_0 \\\\\n",
"B &= B_1 x^{m} + B_0\n",
"\\end{align*}\n",
"\n",
"where $A_0,A_1,B_0,B_1$ have degree at most $m=n/2$. Explicitly, we have $A_0=\\sum_{i=0}^{m-1}a_ix^i$ and $A_1=\\sum_{i=0}^{m}a_{i+m}x^i$.\n",
"\n",
"Now we have\n",
"\n",
"$$ AB = A_1B_1 x^n + (A_1B_0+A_0B_1)x^m + A_0B_0 . \\tag{$*$} $$\n",
"\n",
"Computing this expression uses four multiplications of polynomials of degree at most $m$. However, recall that multiplication of two polynomials of degree $m$ uses $(m+1)^2=n^2/4+n+1$ coefficient multiplications. The total number of coefficient multiplications using this method would be $(m+1)^2+2(m+1)m+m^2=4m^2+4m+1$, which is $(n+1)^2$.\n",
"\n",
"Thus we haven't gained anything yet. However, this was only the simplest possible way of decomposing the product $AB$ and we can rearrange this decomposition to get an improvement.\n",
"\n",
"#### Side question: How many coefficient _additions_ does ($*$) require?\n",
"\n",
"There are three additions on polynomials of degree at most $2n$:\n",
"\n",
"* Adding $A_1B_0$ and $A_0B_1$\n",
"* Adding $A_0B_0$ to the result of the above (multiplied by $x^m$)\n",
"* Adding $A_1B_1x^n$ to the above\n",
"\n",
"Each polynomial addition uses $O(n)$ coefficient additions, so there are $O(n)$ total coefficient additions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Alternative decomposition\n",
"\n",
"Note that\n",
"\n",
"$$(A_1+A_0)(B_1+B_0)=A_1B_1 + (A_1B_0+A_0B_1) + A_0B_0 . $$\n",
"\n",
"The terms on the right have a striking similarity to the terms that appear on the right of ($*$). For example, the parenthesized expression appears as the factor in front of $x^m$ in ($*$).\n",
"\n",
"By adding and subtracting $A_1B_1x^m$ and $A_0B_0x^m$ on the right side of the above expression and using ($*$) we derive\n",
"\n",
"$$ AB = A_1B_1 (x^n - x^m) + (A_1+A_0)(B_1+B_0) x^m + A_0B_0 (1 - x^m) . \\tag{$**$} $$\n",
"\n",
"Computing the product $AB$ using this expression uses only _three_ multiplications of polynomials of degree $m$, although it does use more additions.\n",
"\n",
"Listing the multiplications and additions explicitly:\n",
"\n",
"#### Multiplications:\n",
"* $A_1B_1$\n",
"* $(A_1+A_0)(B_1+B_0)$\n",
"* $A_0B_0$\n",
"\n",
"#### Additions:\n",
"* $A_1+A_0$\n",
"* $B_1+B_0$\n",
"* $A_0B_0$ added to $-A_0B_0x^m$\n",
"* $A_1B_1x^n$ added to $-A_1B_1x^m$\n",
"* $(A_1+A_0)(B_1+B_0) x^m$ added to $A_0B_0 (1 - x^m)$\n",
"* $A_1B_1 (x^n - x^m)$ added to the result of the previous line\n",
"\n",
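"We can sanity-check the identity ($**$) on concrete coefficients; the helper functions below (`pmul`, `padd`, `shift`, `neg`) are our own, written just for this check:\n",
"\n",
"```python\n",
"# Verify (**) for A = 1 + 2x + 3x^2 + 4x^3 and B = 5 + 6x + 7x^2 + 8x^3,\n",
"# split with m = 2 (so n = 4): the right-hand side of (**) equals A*B.\n",
"def pmul(A, B):  # schoolbook product of coefficient lists\n",
"    C = [0] * (len(A) + len(B) - 1)\n",
"    for i, a in enumerate(A):\n",
"        for j, b in enumerate(B):\n",
"            C[i + j] += a * b\n",
"    return C\n",
"\n",
"def padd(A, B):  # sum of coefficient lists (padding the shorter one)\n",
"    L = max(len(A), len(B))\n",
"    A, B = A + [0] * (L - len(A)), B + [0] * (L - len(B))\n",
"    return [a + b for a, b in zip(A, B)]\n",
"\n",
"def shift(A, k):  # multiply a coefficient list by x^k\n",
"    return [0] * k + A\n",
"\n",
"def neg(A):\n",
"    return [-a for a in A]\n",
"\n",
"A, B = [1, 2, 3, 4], [5, 6, 7, 8]\n",
"m = 2; n = 2 * m\n",
"A0, A1, B0, B1 = A[:m], A[m:], B[:m], B[m:]\n",
"P = pmul(A1, B1)                      # A1*B1\n",
"Q = pmul(padd(A1, A0), padd(B1, B0))  # (A1+A0)(B1+B0)\n",
"R = pmul(A0, B0)                      # A0*B0\n",
"rhs = padd(padd(padd(shift(P, n), neg(shift(P, m))), shift(Q, m)),\n",
"           padd(R, neg(shift(R, m))))\n",
"print(rhs == pmul(A, B))  # True\n",
"```\n",
"\n",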
"### Why is this a big deal?\n",
"\n",
"The fact that there are additional polynomial additions isn't a dealbreaker since polynomial addition is cheap and each of the polynomials above has degree at most $2n$. However, it may seem like this alternative decomposition isn't that big of a deal since it only saves a single polynomial multiplication. In other words, the number of coefficient multiplications should be roughly $3m^2=0.75\\cdot n^2$, which is still quadratic in $n$.\n",
"\n",
"The key insight is that we can apply this method _recursively_ on each of the three polynomial multiplications that need to be computed. Karatsuba's method is exactly that: compute the product from ($**$) and compute each of the polynomial multiplications $A_1B_1$, $(A_1+A_0)(B_1+B_0)$, and $A_0B_0$ recursively using Karatsuba's method.\n",
"\n",
"The above description started out by assuming that $n$ was even, but it can easily be adapted to polynomials of any degree (which is important if we are to apply it recursively).\n",
"\n",
"To do this, let $n=2^k$ be an upper bound on the degree of $A$ and $B$. The above description goes through exactly the same except that now the coefficient lists of $A$ and $B$ are padded with zeros (if necessary) in order to make the coefficient list have length $n+1$."
]
},
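{
"cell_type": "markdown",
"metadata": {},
"source": [
"The recursion can be sketched in a few lines of Python (our own implementation; it assumes both coefficient lists already have the same power-of-two length, padding with zeros as described above):\n",
"\n",
"```python\n",
"# Karatsuba multiplication on coefficient lists whose common length n\n",
"# is a power of two; returns the coefficient list of the product.\n",
"def karatsuba(A, B):\n",
"    n = len(A)  # == len(B), a power of two\n",
"    if n == 1:\n",
"        return [A[0] * B[0]]\n",
"    m = n // 2\n",
"    A0, A1 = A[:m], A[m:]\n",
"    B0, B1 = B[:m], B[m:]\n",
"    P = karatsuba(A1, B1)  # A1*B1\n",
"    R = karatsuba(A0, B0)  # A0*B0\n",
"    Q = karatsuba([a1 + a0 for a0, a1 in zip(A0, A1)],\n",
"                  [b1 + b0 for b0, b1 in zip(B0, B1)])\n",
"    mid = [q - p - r for q, p, r in zip(Q, P, R)]  # A1*B0 + A0*B1\n",
"    C = [0] * (2 * n - 1)\n",
"    for i, r in enumerate(R):    # A0*B0\n",
"        C[i] += r\n",
"    for i, q in enumerate(mid):  # (A1*B0 + A0*B1) * x^m\n",
"        C[i + m] += q\n",
"    for i, p in enumerate(P):    # A1*B1 * x^n\n",
"        C[i + n] += p\n",
"    return C\n",
"\n",
"print(karatsuba([1, 2, 3, 4], [5, 6, 7, 8]))  # [5, 16, 34, 60, 61, 52, 32]\n",
"```"
]
},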
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Cost analysis of Karatsuba's algorithm\n",
"\n",
"Recursive algorithms are usually analyzed using induction, and Karatsuba's algorithm is no different.\n",
"\n",
"Let $T(n)$ denote the cost (in terms of the number of coefficient operations) of multiplying two polynomials of degree at most $n$.\n",
"\n",
"We will show that Karatsuba's algorithm uses $O(n^{\\operatorname{lg}3})$ coefficient operations (where $\\operatorname{lg}$ denotes the base-2 logarithm).\n",
"\n",
"Since $n=2^k$ is an upper bound on the degree of the input polynomials, $n/2=2^{k-1}$ is an upper bound on the degree of the polynomials used in the recursive calls in Karatsuba's algorithm. Since there are 3 recursive calls, the recursive calls will cost at most $3T(n/2)$. The only other coefficient operations are in the polynomial additions listed above, each on polynomials of degree at most $2n$. Altogether these will total at most $cn$ operations for some constant $c$, e.g., $c=8$ works (though this is not the best possible). Thus we have that\n",
"\n",
"$$ T(n) \\leq 3T(n/2) + cn . $$\n",
"\n",
"We will now prove that $T(2^k)\\leq c(3^{k+1}-2^{k+1})$ using induction on $k$.\n",
"\n",
"**Base case:** When $k=0$, $T(2^0)=7$ since multiplying two linear polynomials using Karatsuba's algorithm requires 3 coefficient multiplications and 4 coefficient additions/subtractions (two additions to form $a_1+a_0$ and $b_1+b_0$, and two subtractions to recover the middle coefficient). Therefore $7=T(2^0)\\leq c(3^1-2^1)=c$ holds (when $c\\geq7$).\n",
"\n",
"**Inductive step:** Suppose that $T(2^{k-1})\\leq c(3^{k}-2^{k})$. Then we have that\n",
"\n",
"\\begin{align*}\n",
"T(2^k) &\\leq 3T(2^{k-1}) + c2^k \\\\\n",
"&\\leq 3c(3^{k}-2^{k}) + c2^k &&\\text{by inductive hypothesis} \\\\\n",
"&= c(3^{k+1}-2^{k+1}) &&\\text{as required}.\n",
"\\end{align*}\n",
"\n",
"Now take $k=\\operatorname{lg}n$ so that $2^k=n$ and $3^k=(2^{\\operatorname{lg}3})^k=(2^k)^{\\operatorname{lg}3}=n^{\\operatorname{lg}3}$.\n",
"\n",
"Then $T(n)\\leq c(3n^{{\\operatorname{lg}3}}-2n)=O(n^{\\operatorname{lg}3})$ and so Karatsuba's algorithm applied to polynomials of degree $n$ runs in time $O(n^{\\operatorname{lg}3})$."
]
},
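{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check on this bound, we can count coefficient multiplications directly: multiplications only occur at the base case of the recursion, so for $n=2^k$ the count is exactly $3^k=n^{\\operatorname{lg}3}$. A quick sketch (our own):\n",
"\n",
"```python\n",
"# Coefficient multiplications used by Karatsuba on inputs of length n = 2^k:\n",
"# all multiplications happen in the base case, so M(n) = 3*M(n/2), M(1) = 1.\n",
"def kara_mults(n):\n",
"    return 1 if n == 1 else 3 * kara_mults(n // 2)\n",
"\n",
"for k in [2, 4, 6, 8, 10]:\n",
"    n = 2 ** k\n",
"    print(n, kara_mults(n), n * n)  # 3^k grows much slower than n^2\n",
"\n",
"# For n = 1024: 3^10 = 59049 multiplications versus 1024^2 = 1048576.\n",
"assert kara_mults(1024) == 3 ** 10\n",
"```"
]
},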
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Master theorem\n",
"\n",
"Note that the \"[master theorem](https://en.wikipedia.org/wiki/Master_theorem_%28analysis_of_algorithms%29)\" can be used to directly determine the cost of Karatsuba's algorithm.\n",
"\n",
"The master theorem applies when a problem of size $n$ is recursively solved by an algorithm that makes $a$ recursive calls on subproblems that are a factor of $b$ smaller. If $f(n)$ is the cost of splitting the problem and combining the results to find the final solution then analyzing this algorithm leads to the cost recurrence\n",
"\n",
"$$ T(n) = a \\cdot T(n/b) + f(n) $$\n",
"\n",
"and when $f(n)=O(n^{\\log_b a-\\epsilon})$ for some $\\epsilon>0$ the master theorem says that $T(n)=O(n^{\\log_b a})$.\n",
"\n",
"In Karatsuba's algorithm we have $a=3$, $b=2$, and $f(n)=O(n)$.\n",
"\n",
"Since $f(n)=O(n)=O(n^{\\log_2 3-\\epsilon})$ for $\\epsilon\\approx0.58$ we have that $T(n)=O(n^{\\log_2 3})$ by the master theorem."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Takeaway\n",
"\n",
"Because $\\log_2 3\\approx1.585$ Karatsuba's method enables multiplying polynomials in $O(n^{1.59})$ coefficient operations which is much faster than the standard $O(n^2)$ approach. Furthermore, Karatsuba's algorithm can be adapted to multiply integers and the method is also effective in practice.\n",
"\n",
"For example, it has been [implemented in the GMP library](https://gmplib.org/manual/Karatsuba-Multiplication) as one of the algorithms for multiplying large integers. In fact, the GMP developers found that it could outperform the standard method for integers that fit in as little as 10 words.\n",
"\n",
"The following diagram shows the fastest algorithm (as benchmarked in GMP) for multiplying an integer of length $x$ by an integer of length $y$ (with both axes represented on a log scale).\n",
"\n",
"Karatsuba's method appears in pink and is referred to as \"toom22\" because it splits both operands into 2 parts (and is named after Andrei Toom, who in 1963 described a generalization of Karatsuba's method that splits the polynomial into more than 2 parts).\n",
"\n",
"![Multiplication benchmark results](log.k10.550.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Faster Multiplication: The Fast Fourier Transform\n",
"\n",
"Now we will discuss an even better multiplication algorithm: one that is nearly linear in $n$, the degree of the polynomials (or length of the integers). It is based on a fundamental operation that has many varied applications: the Fourier transform.\n",
"\n",
"In particular, we will cover the _discrete Fourier transform_ (DFT) and how to compute it very efficiently using an algorithm known as the _fast Fourier transform_ (FFT). The transform is called \"discrete\" because it operates on a discrete set of datapoints. The Fourier transform can also be applied to continuous real-valued functions and this is very useful in fields such as signal processing."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prelude: Multiplication via interpolation\n",
"\n",
"Recall the algorithm for multiplying polynomials via evaluation and interpolation that we saw earlier. It had the following steps for multiplying two polynomials $A$ and $B$ of degree $n$:\n",
"\n",
"1. Choose $2n+1$ distinct points and evaluate $A$ and $B$ at those points to get values $\\alpha_0,\\dotsc,\\alpha_{2n}$ and $\\beta_0,\\dotsc,\\beta_{2n}$.\n",
"2. Multiply the $\\alpha_i$ and $\\beta_i$ together.\n",
"3. Use interpolation to find the unique polynomial $C$ of degree $2n$ which has value $\\alpha_i\\beta_i$ at the $i$th chosen point. It follows that $C$ is the result of multiplying $A$ by $B$.\n",
"\n",
"We saw that steps 1 and 3 cost $O(n^2)$ while step 2 only costs $O(n)$. This is interesting because it is in the cheapest step where the multiplication \"actually happens\"; steps 1 and 3 are really just changing the representation of the polynomial. More precisely, step 1 changes the representation to a set of point-value pairs and then step 3 changes the representation back to a coefficient list.\n",
"\n",
"#### How could this be improved?\n",
"\n",
"Note that in step 1 we have a completely free choice of which points to evaluate $A$ and $B$ on. The fundamental insight that lies behind the fast multiplication algorithm is to choose the points in such a way that steps 1 and 3 can be performed much more quickly by exploiting the structure of the chosen points.\n",
"\n",
"\n",
"\n",
"As mentioned above, step 1 can be viewed as \"transforming\" the coefficient list representation of a polynomial to a point-value pair representation. In fact, we will now see how to choose the points so that this transformation is given by the Fourier transform."
]
},
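{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make steps 1 through 3 concrete, here is a small Python sketch (helper names are our own) that multiplies integer polynomials by evaluating at the points $0,1,\\dotsc,2n$ and interpolating with exact rational arithmetic. This is the $O(n^2)$ version; choosing better points is what the FFT will improve:\n",
"\n",
"```python\n",
"# Multiplication via evaluation at 0, 1, ..., 2n and Lagrange interpolation.\n",
"from fractions import Fraction\n",
"\n",
"def evaluate(A, x):  # Horner evaluation of a coefficient list\n",
"    v = 0\n",
"    for a in reversed(A):\n",
"        v = v * x + a\n",
"    return v\n",
"\n",
"def mul_linear(P, c):  # multiply a coefficient list by (x - c)\n",
"    Q = [Fraction(0)] * (len(P) + 1)\n",
"    for k, p in enumerate(P):\n",
"        Q[k + 1] += p\n",
"        Q[k] -= c * p\n",
"    return Q\n",
"\n",
"def interpolate(points):  # Lagrange interpolation -> coefficient list\n",
"    N = len(points)\n",
"    coeffs = [Fraction(0)] * N\n",
"    for i, (xi, yi) in enumerate(points):\n",
"        basis, denom = [Fraction(1)], Fraction(1)\n",
"        for j, (xj, _) in enumerate(points):\n",
"            if j != i:\n",
"                basis = mul_linear(basis, xj)\n",
"                denom *= xi - xj\n",
"        scale = Fraction(yi) / denom\n",
"        for k, b in enumerate(basis):\n",
"            coeffs[k] += scale * b\n",
"    return coeffs\n",
"\n",
"def multiply(A, B):  # A, B of equal length n+1\n",
"    n = len(A) - 1\n",
"    xs = list(range(2 * n + 1))\n",
"    pts = [(x, evaluate(A, x) * evaluate(B, x)) for x in xs]\n",
"    return [int(c) for c in interpolate(pts)]\n",
"\n",
"print(multiply([1, 2], [3, 4]))  # [3, 10, 8]\n",
"```"
]
},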
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enter Fourier\n",
"\n",
"Joseph Fourier was a French mathematician who developed the theory behind what are now known as Fourier series. He introduced them to solve a partial differential equation known as the [heat equation](https://en.wikipedia.org/wiki/Heat_equation).\n",
"\n",
"It was later realized that Fourier series have a huge number of other practical uses: Wikipedia lists applications to electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics, and shell theory, among others.\n",
"\n",
"For our purposes we will use _discrete Fourier series_, more commonly known as the _discrete Fourier transform_.\n",
"\n",
"### Optimal selection of points\n",
"\n",
"Which points should we choose in order to improve the multiplication-via-interpolation algorithm?\n",
"\n",
"One possibility would be to choose the points to be powers of a fixed base, e.g., choose the points to be $2^0,2^1,2^2,\\dotsc,2^{2n}$.\n",
"\n",
"This choice would allow some computations to be reused. For example, in order to evaluate $A(2)$ one uses $2,2^2,\\dotsc,2^n$ and in order to evaluate $A(2^2)$ one uses $2^2,2^4\\dotsc,2^{2n}$ (so these would not have to be recomputed).\n",
"\n",
"There are more values which could be reused but not all, since for example the evaluation of $A(2^{2n})$ uses $(2^{2n})^n=2^{2n^2}$ which will not have been computed before. This scheme also has the drawback that the numbers involved get extremely large. It could be improved if the numbers involved in evaluating $A$ were not too large and corresponded _exactly_ with the set of points.\n",
"\n",
"In other words, we want to choose points $p_0,\\dotsc,p_{2n}$ so that\n",
"\n",
"$$ \\bigl\\{\\, p_i^j : 0\\leq i,j\\leq 2n \\,\\bigr\\} = \\{p_0,\\dotsc,p_{2n}\\} . $$\n",
"\n",
"This can be done if the points are chosen to form a _group_ under multiplication. This is a set of numbers containing the identity $1$ and which is closed under multiplication and division. The multiplication in a group must also satisfy the associative law $(a\\cdot b)\\cdot c=a\\cdot(b\\cdot c)$, though for us this will always hold (indeed, by definition it holds in any ring).\n",
"\n",
"In particular, we will choose the points $p_0,\\dotsc,p_{2n}$ to form a _cyclic group_ of order $2n+1$. A cyclic group has a generator element from which all group elements can be obtained simply by raising the generator to higher powers.\n",
"\n",
"### Roots of unity\n",
"\n",
"A _root of unity_ is an element $\\omega$ such that $\\omega^n=1$ for some positive integer $n$. For example, $1$ is a trivial root of unity.\n",
"\n",
"If $\\omega^n=1$ then $\\omega$ is also said to be an $n$th root of unity.\n",
"\n",
"Furthermore, if $\\omega^n=1$ but $\\omega^m\\neq1$ for all $1\\leq m<n$, then $\\omega$ is said to be a _primitive_ $n$th root of unity.\n",
"\n",
"Over the complex numbers, $\\omega:=e^{2\\pi i/n}$ is a primitive $n$th root of unity, and the $n$th roots of unity are exactly the powers $\\omega^0,\\omega^1,\\dotsc,\\omega^{n-1}$, which form a cyclic group of order $n$ under multiplication."
]
},
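{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sketch (our own) lists the complex $n$th roots of unity and checks which are primitive; $e^{2\\pi ik/n}$ turns out to be primitive exactly when $\\gcd(k,n)=1$:\n",
"\n",
"```python\n",
"import cmath\n",
"from math import gcd\n",
"\n",
"def roots_of_unity(n):\n",
"    # the n-th roots of unity are e^(2*pi*i*k/n) for k = 0, 1, ..., n-1\n",
"    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]\n",
"\n",
"def is_primitive(w, n, tol=1e-9):\n",
"    # w^n = 1 but w^m != 1 for all 1 <= m < n (up to rounding error)\n",
"    return abs(w ** n - 1) < tol and all(abs(w ** m - 1) > tol for m in range(1, n))\n",
"\n",
"n = 6\n",
"for k, w in enumerate(roots_of_unity(n)):\n",
"    print(k, is_primitive(w, n))  # primitive exactly when gcd(k, 6) == 1\n",
"```"
]
},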
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Evaluation and interpolation at roots of unity\n",
"\n",
"Let's consider our previous algorithm for multiplying polynomials but now using roots of unity as the chosen points.\n",
"\n",
"Input: Polynomials $A$ and $B$ of degree at most $n$.\n",
"\n",
"**Step 0:** Set $N=2n+1$ and compute the $N$th roots of unity $\\omega^0,\\dotsc,\\omega^{N-1}$ where $\\omega:=e^{2\\pi i/N}$.\n",
"\n",
"**Step 1:** Evaluate $A$ and $B$ at the $N$th roots of unity to obtain $\\alpha_k$ and $\\beta_k$, i.e.,\n",
"\n",
"$$ \\alpha_k := A\\bigl(\\omega^{k}\\bigr), \\qquad \\beta_k := B\\bigl(\\omega^{k}\\bigr) \\qquad\\text{for $k=0,1,\\dotsc,N-1$}. $$\n",
"\n",
"**Step 2:** For each $0\\leq k<N$, compute the product $\\gamma_k := \\alpha_k\\beta_k$.\n",
"\n",
"**Step 3:** Use interpolation to find the unique polynomial $C$ of degree less than $N$ with $C(\\omega^k)=\\gamma_k$ for $k=0,1,\\dotsc,N-1$. Then $C=AB$.\n",
"\n",
"Recall that evaluating a polynomial at the powers of $\\omega$ is exactly computing $\\DFT_\\omega$ of its coefficient vector, and that interpolation (the inverse transformation) is given by $\\frac{1}{N}\\DFT_{\\omega^{-1}}$.\n",
"\n",
"### Example\n",
"\n",
"For example, take $N=4$ and $\\omega:=i$, and suppose we want to recover the coefficient vector whose DFT is $[10,-2-2i,-2,-2+2i]$. To do this we compute\n",
"$\\DFT_{\\omega^{-1}}\\bigl([10,-2-2i,-2,-2+2i]\\bigr)$ and then multiply by $\\frac{1}{N}=\\frac{1}{4}$.\n",
"Computing this vector entry-by-entry:\n",
"\n",
"Its first entry is $$10 + (-2-2i) + (-2) + (-2+2i) = 4 . $$\n",
"\n",
"Its second entry is $$10 + (-2-2i)\\omega^{-1} + (-2)\\omega^{-2} + (-2+2i)\\omega^{-3} = \n",
"10 + (-2-2i)(-i) + (-2)(-1) + (-2+2i)(i) = 8.$$\n",
"\n",
"Its third entry is $$10 + (-2-2i)\\omega^{-2} + (-2)\\omega^{-4} + (-2+2i)\\omega^{-6} =\n",
"10 + (-2-2i)(-1) + (-2) + (-2+2i)(-1) = 12.$$\n",
"\n",
"Its fourth entry is $$10 + (-2-2i)\\omega^{-3} + (-2)\\omega^{-6} + (-2+2i)\\omega^{-9} =\n",
"10 + (-2-2i)(i) + (-2)(-1) + (-2+2i)(-i) = 16.$$\n",
"\n",
"Thus $\\frac{1}{N}\\DFT_{\\omega^{-1}}\\bigl([10,-2-2i,-2,-2+2i]\\bigr)=\\frac{1}{4}[4,8,12,16]=[1,2,3,4]$ as expected.\n",
"\n",
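"This computation is easy to check numerically with a short Python sketch (the `dft` helper below is our own, following the definition of the DFT as evaluation at powers of $\\omega$):\n",
"\n",
"```python\n",
"# With N = 4 and w = i, applying the DFT at powers of w^(-1) to\n",
"# [10, -2-2i, -2, -2+2i] and dividing by N recovers [1, 2, 3, 4].\n",
"def dft(vec, w):\n",
"    N = len(vec)\n",
"    return [sum(vec[j] * w ** (j * k) for j in range(N)) for k in range(N)]\n",
"\n",
"w = 1j\n",
"spectrum = [10, -2 - 2j, -2, -2 + 2j]\n",
"inverse = [v / 4 for v in dft(spectrum, 1 / w)]\n",
"print(inverse)  # approximately [1, 2, 3, 4]\n",
"```\n",
"\n",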
"### Computation\n",
"\n",
"Note that the FFT algorithm applies equally to computing $\\DFT_{\\omega^{-1}}$ quickly; one merely replaces $\\omega$ with $\\omega^{-1}$ when performing the FFT."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fast Multiplication: Putting it Together\n",
"\n",
"Combining the fast Fourier transform with the evaluation/interpolation multiplication algorithm we finally achieve a fast multiplication algorithm.\n",
"\n",
"In detail, the steps of the algorithm are as follows on input polynomials $f,g\\in\\mathbb{C}[x]$ of degree $n$:\n",
"\n",
"Let $N:=2n+1$ and $\\omega:=e^{2\\pi i/N}$. Say $A$ and $B$ are the coefficient vectors of $f$ and $g$ (extended with zeros to be of length $N$).\n",
"\n",
"**Step 1.** Compute $\\DFT_\\omega(A)$ and $\\DFT_\\omega(B)$ using the fast Fourier transform.\n",
"\n",
"**Step 2.** Compute the entrywise product $C:=\\DFT_\\omega(A)*\\DFT_\\omega(B)$.\n",
"\n",
"**Step 3.** Compute $\\DFT^{-1}_\\omega(C)=\\frac{1}{N}\\DFT_{\\omega^{-1}}(C)$ using the fast Fourier transform. Its entries will be the coefficient vector of the product $f\\cdot g$.\n",
"\n",
"The correctness of this algorithm follows from the correctness of the evaluation/interpolation multiplication algorithm.\n",
"\n",
"It also relies on the fact that computing a DFT is equivalent to evaluating a polynomial at the powers of $\\omega$ and computing an inverse DFT is equivalent to interpolating a polynomial from its evaluations at the powers of $\\omega$.\n",
"\n",
"### Cost analysis\n",
"\n",
"Step 1 does two FFTs and uses $O(N\\log N)$ operations.\n",
"\n",
"Step 2 does $N$ coefficient multiplications and uses $O(N)$ operations.\n",
"\n",
"Step 3 does one inverse FFT and uses $O(N\\log N)$ operations.\n",
"\n",
"Since $N=2n+1$ these costs can equivalently be presented using $n$ instead of $N$.\n",
"\n",
"Thus, in total we can multiply two polynomials of degree $n$ using $O(n\\log n)$ operations.\n",
"\n",
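"The whole pipeline can be sketched in pure Python (a radix-2 Cooley-Tukey FFT of our own; it pads to a power-of-two length, so it uses a $2^j$th root of unity rather than exactly $N=2n+1$ points, and rounds the final coefficients back to integers):\n",
"\n",
"```python\n",
"import cmath\n",
"\n",
"def fft(a, invert=False):\n",
"    # radix-2 Cooley-Tukey FFT; len(a) must be a power of two\n",
"    N = len(a)\n",
"    if N == 1:\n",
"        return a[:]\n",
"    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)\n",
"    sign = -1 if invert else 1\n",
"    out = [0] * N\n",
"    for k in range(N // 2):\n",
"        w = cmath.exp(sign * 2j * cmath.pi * k / N)\n",
"        out[k] = even[k] + w * odd[k]\n",
"        out[k + N // 2] = even[k] - w * odd[k]\n",
"    return out\n",
"\n",
"def fft_multiply(A, B):\n",
"    # FFT both inputs, multiply pointwise, inverse FFT, divide by the length\n",
"    n = 1\n",
"    while n < len(A) + len(B) - 1:\n",
"        n *= 2\n",
"    fa = fft(A + [0] * (n - len(A)))\n",
"    fb = fft(B + [0] * (n - len(B)))\n",
"    fc = [x * y for x, y in zip(fa, fb)]\n",
"    c = fft(fc, invert=True)\n",
"    return [round((v / n).real) for v in c[:len(A) + len(B) - 1]]\n",
"\n",
"print(fft_multiply([1, 2, 3, 4], [5, 6, 7, 8]))  # [5, 16, 34, 60, 61, 52, 32]\n",
"```\n",
"\n",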
"### Caveats\n",
"\n",
"This analysis assumed we were working over the complex numbers where primitive $N$th roots of unity exist.\n",
"\n",
"The algorithm can also be used over any field containing primitive $N$th roots of unity, though they do not exist in all fields. For example, primitive $(p-1)$th roots of unity exist in $\\mathbb{Z}_p^*$, but $\\mathbb{Q}$ and $\\mathbb{R}$ contain no primitive $n$th roots of unity for $n>2$ (their only roots of unity are $\\pm1$).\n",
"\n",
"If $p$ is a prime of the form $k2^m+1$ then $\\mathbb{Z}_p^*$ contains a primitive $N$th root of unity with $N:=2^m$ so the multiplication algorithm we described will also work to multiply two polynomials in $\\mathbb{Z}_p[x]$ of degree less than $N/2$.\n",
"\n",
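"For example (a small sketch with our own choice of $p=97=3\\cdot2^5+1$ and $N=32$), such a root can be found by raising a generator of $\\mathbb{Z}_p^*$ to the power $(p-1)/N$:\n",
"\n",
"```python\n",
"# Find a primitive N-th root of unity in Z_p^* for p = 97 = 3*2^5 + 1, N = 32.\n",
"p, N = 97, 32\n",
"\n",
"def order(a, p):\n",
"    # multiplicative order of a modulo p\n",
"    o, x = 1, a\n",
"    while x != 1:\n",
"        x = x * a % p\n",
"        o += 1\n",
"    return o\n",
"\n",
"g = next(a for a in range(2, p) if order(a, p) == p - 1)  # a generator of Z_p^*\n",
"w = pow(g, (p - 1) // N, p)  # has order exactly N = 32\n",
"print(g, w, order(w, p))\n",
"```\n",
"\n",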
"The algorithm can also be adapted to coefficient rings which do not contain primitive roots of unity by constructing \"artificial\" primitive roots of unity. This increases the total running cost by a factor of $\\log\\log N$."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Integer Multiplication\n",
"\n",
"The algorithm can also be relatively straightforwardly adapted to multiply integers quickly.\n",
"\n",
"In 1971, [A. Schönhage and V. Strassen](https://link.springer.com/article/10.1007/BF02242355) presented an FFT-based algorithm for multiplying two integers of $n$ words that runs in $O(n\\log n\\log\\log n)$ word operations. Their algorithm is also effective in practice (e.g., it is implemented in the GMP library).\n",
"\n",
"Slight improvements to this running time were discovered by various researchers over the past fifty years.\n",
"\n",
"Surprisingly, it was only in 2019 that [D. Harvey and J. van der Hoeven](https://annals.math.princeton.edu/2021/193-2/p04) presented an algorithm matching the polynomial multiplication runtime, multiplying two integers of $n$ words in $O(n\\log n)$ word operations. (However, their algorithm is purely theoretical and not used in practice.)\n",
"\n",
"### Can this be improved?\n",
"\n",
"It is currently unknown if multiplication algorithms exist that use asymptotically fewer than $n\\log n$ word/coefficient operations; none has ever been found.\n",
"\n",
"Some researchers have conjectured that $O(n\\log n)$ word/coefficient operations is indeed the fastest possible running time of multiplication."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "SageMath 9.5",
"language": "sage",
"name": "sagemath"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
},
"latex_metadata": {
"author": "Curtis Bright",
"date": "November 9, 2022",
"title": "Computational Mathematics: Handout 09"
}
},
"nbformat": 4,
"nbformat_minor": 2
}