Week 3 – Displaying SageMath Colormaps with Matrix Plots and a New Background

Ok, today is a good day if you like colorful pictures! The goal today is to display the various colormaps that SageMath has to offer. Those colormaps can be listed via

input: sorted(colormaps)
output: ['Accent', 'Blues', 'BrBG', 'BuGn', 'BuPu', 'CMRmap', 'Dark2', 'GnBu', 'Greens', 'Greys', 'OrRd', 'Oranges', 'PRGn', 'Paired', 'Pastel1', 'Pastel2', 'PiYG', 'PuBu', 'PuBuGn', 'PuOr', 'PuRd', 'Purples', 'RdBu', 'RdGy', 'RdPu', 'RdYlBu', 'RdYlGn', 'Reds', 'Set1', 'Set2', 'Set3', 'Spectral', 'Wistia', 'YlGn', 'YlGnBu', 'YlOrBr', 'YlOrRd', 'afmhot', 'autumn', 'binary', 'bone', 'brg', 'bwr', 'cool', 'coolwarm', 'copper', 'cubehelix', 'flag', 'gist_earth', 'gist_gray', 'gist_heat', 'gist_ncar', 'gist_rainbow', 'gist_stern', 'gist_yarg', 'gnuplot', 'gnuplot2', 'gray', 'hot', 'hsv', 'jet', 'nipy_spectral', 'ocean', 'pink', 'prism', 'rainbow', 'seismic', 'spring', 'summer', 'tab10', 'tab20', 'tab20b', 'tab20c', 'terrain', 'winter']

To demonstrate these colormaps, I want to choose a few different interesting-looking matrices. For example, define the 20 \times 20 matrix M by setting the (i,j)^{th} entry equal to i^2 + j^2, defined over the field \mathbb{F}_{53}. In code,

M = matrix(GF(53),20,20,lambda i,j: i^2+j^2)
show(M)

This gives the matrix

We can then generate the matrix plots using

for col in sorted(colormaps):
    matrix_plot(M, cmap=col).show()

To save each image, replace .show() with .save('(i^2+j^2)_colormap_'+col+'.png') and then go to your files. They'll all be listed there! Here they are:


If we replace the + with a -, meaning the (i,j)^{th} entry is i^2 - j^2, then we get these instead:


Let's choose a specific colormap to better see the differences and similarities between these two formulas. For the 28 \times 28 matrices below, the left is defined with (i,j)^{th} entry equal to i^2 + j^2, while the right is defined by i^2 - j^2, both over \mathbb{F}_{53}, the finite field of 53 elements. This means we only have entries between 0 and 52 and we work mod 53. The colormap is OrRd. Notice the parabolic shape on the left and the hyperbolic shape on the right.

And these are the same matrices with the Accent colormap.

That color scheme actually fits fairly nicely with the header color scheme, which inspired me to change the blog's background. The background picture of the blog before this post was this:

But this meant that if you were on a long page, like the Blog page, the pattern would tile and repeat. The edges didn't match up, so you got these horrible cuts that just looked so bad, like this:

To fix this, we want to choose a picture where the top and bottom match up, and the left and right match up, so that when the pictures are tiled, they line up seamlessly. Let’s do this mathematically! The key is to use modular arithmetic.

If we limit the number of colors to 13 (meaning we work in \mathbb{F}_{13}), then we know that the coordinates (x,y), (x + 13, y), (x, y + 13), and (x+13,y+13) will have the same color, i.e. we repeat every 13 pixels. In Sage, the finite field \mathbb{F}_{13} is written as GF(13), meaning Galois Field.

Keeping with the theme, let’s choose a 13 \times 13 matrix with entries i^2 + j^2, over \mathbb{F}_{13}. In code,

M = matrix(GF(13),13,13,lambda i,j: i^2+j^2)
matrix_plot(M,cmap='Accent')

Then we are guaranteed a seamless connection. For the proof, we can check!
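Here's a minimal sanity check of that claim in Sage (the 26 below is just an arbitrary multiple of 13, and the check reuses the M defined above):

big = matrix(GF(13), 26, 26, lambda i, j: i^2 + j^2)
# Every 13x13 block of the bigger matrix equals M, since (i + 13)^2 + j^2 == i^2 + j^2 mod 13.
print(all(big[13*r:13*(r+1), 13*c:13*(c+1)] == M for r in range(2) for c in range(2)))  # True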

Or…just look at the background now!


Bonus Pic! This is a 24 \times 24 matrix defined over \mathbb{F}_{13} with entries i^2 + j^2. This collage also inspired the background, which uses the very first colormap shown here, Accent.

Thank you for reading!

Jonathan Gerhard

Posted in MathematicalADD, Weekly Posts

Week 2 – Pictures and Matrices and Pictures

The guiding idea is that a picture is made up of pixels, each holding a color. If we line up every color used, and index them 0,1,2,3,…, then we can turn any picture into a matrix of numbers. Conversely, if we have a matrix with nonnegative whole number entries, we can turn it into a picture by assigning each number a color and then filling in (this is called a color map). To demonstrate, let's take a big matrix, 32×32, to create a relatively small picture, with entries 0,1,2,3,4,5,6,7,8,9,10. For those interested, this was generated in SageMath with the code

M = random_matrix(GF(11),32,32,algorithm='unimodular'); show(M)

This forms a 32×32 pixel picture with 0 being white, 10 being black, and the rest being shades of grey in between.

One matrix transformation that I thought would be particularly interesting is taking inverses. If we had a perfectly nice picture, converted it to a matrix, took the inverse of that matrix, and then turned it back into a picture, what would it look like? Would there be anything left of a pattern in the inverse picture? Let's do our first example! Though the matrix and image above are just random, we know the matrix is unimodular, so we can take its inverse. In fact, since we're taking entries in GF(11), the finite field of 11 elements, the inverse will also have entries in GF(11), so we can be sure to get a matrix that we can turn back into a picture. Here it is! First, the matrix inverse:

And the resulting plot:
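For anyone who wants to reproduce this, here's a minimal Sage sketch of the round trip, reusing the M generated above (the default greyscale colormap is assumed):

Minv = M.inverse()            # entries stay in GF(11), so the inverse also turns into a picture
matrix_plot(M).show()         # the original picture
matrix_plot(Minv).show()      # the "inverse picture"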

Not surprisingly, it's hard to really tell if there's any similarity in these simplified pictures, but the idea is the important part! To see the difference a little more clearly, here's a crossfade between the two:

Looks almost like a beating heart! Sage actually has a few different color maps if we don’t want to just work in black and white:

sage: sorted(colormaps)
output: ['Accent', 'Blues', 'BrBG', 'BuGn', 'BuPu', 'CMRmap', 'Dark2', 'GnBu', 'Greens', 'Greys', 'OrRd', 'Oranges', 'PRGn', 'Paired', 'Pastel1', 'Pastel2', 'PiYG', 'PuBu', 'PuBuGn', 'PuOr', 'PuRd', 'Purples', 'RdBu', 'RdGy', 'RdPu', 'RdYlBu', 'RdYlGn', 'Reds', 'Set1', 'Set2', 'Set3', 'Spectral', 'Wistia', 'YlGn', 'YlGnBu', 'YlOrBr', 'YlOrRd', 'afmhot', 'autumn', 'binary', 'bone', 'brg', 'bwr', 'cool', 'coolwarm', 'copper', 'cubehelix', 'flag', 'gist_earth', 'gist_gray', 'gist_heat', 'gist_ncar', 'gist_rainbow', 'gist_stern', 'gist_yarg', 'gnuplot', 'gnuplot2', 'gray', 'hot', 'hsv', 'jet', 'nipy_spectral', 'ocean', 'pink', 'prism', 'rainbow', 'seismic', 'spring', 'summer', 'tab10', 'tab20', 'tab20b', 'tab20c', 'terrain', 'winter']

Here are a few different colormaps, each with a new random matrix generated:

Dark2
Rainbow
Pink
Cubehelix

And let’s finish with the ultra-colorful colormap hsv:

hsv

That’s it for now! Thanks for reading!

Posted in MathematicalADD, Random, Weekly Posts

Week 1 – Visualizing Unimodular Matrices

Something that has interested me recently is that a picture is just a collection of pixels which contain some color information. This means that every image can be turned into a matrix of numbers, so what happens to our picture if we apply transformations to that matrix?

In particular, I'm led to wonder: when is that matrix invertible, and what would the "inverse" look like? Not the actual inverted image, but the image that we get when we take the inverse of the matrix and turn it back into an image. This idea immediately has a problem: just because a matrix has nonnegative integers as entries doesn't mean its inverse will too. To be sure of this, we restrict to unimodular matrices. A matrix is called unimodular if it has determinant 1 or -1. This guarantees that it has an inverse also with integer (though not necessarily nonnegative) entries.

We can list out all 2×2 unimodular matrices with only 0 and 1 as possible entries – denoted U_2(GF(2)) – to get

\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
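Over GF(2) the only nonzero determinant is 1, so unimodular just means invertible, and we can brute-force this list in Sage. A quick sketch:

U2 = [A for A in MatrixSpace(GF(2), 2, 2) if A.det() != 0]   # all invertible 2x2 matrices over GF(2)
print(len(U2))   # 6, matching the list above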

If we turn these into images, with 0 being white and 1 being black, we get a cool way to visualize these matrices!

Well, I think it’s cool anyway! Here’s a nice animated journey through the 6 unimodular matrix images.

For U_3(GF(2)), we have 128 unimodular matrices! I won’t list those all, but here’s a gif that includes every one.

And since variety is the spice of life, I figured we'd add some grey to the mix by looking at U_2(GF(3)), the set of 2×2 unimodular matrices with entries 0 (white), 1 (grey), and 2 (black). There are 21 total.

And that’s all for this week! Pretend like this got out on Sunday…Thank you for reading!

Posted in MathematicalADD, Random, Weekly Posts

Does the bouncing DVD logo ever hit the corner?


Posted in MathematicalADD, Random

Mathematics and Myself from July 2013

In the process of deciding which things to take with me to Michigan, I came across something really really cool! It was a print-out of an email from July 26, 2013 that I had sent to my mom, asking for comments and suggestions on an essay I had written for the MetroDukes Scholarship at JMU. Reading through it was such a shock for me! What do you think?

    When I was in 11th grade, people would often ask, “What do you want to do when you get older?” and I couldn’t come up with an honest response. I just didn’t know. But at the end of that school year, we began to look at an intro to calculus, and I was hooked. This slight exposure to a new type of Mathematics sparked an interest that I heavily explored. By the start of 12th grade, I had already looked through and learned everything that would be taught in my AP Calculus BC class that year.

Now I know what I want to do: study mathematics. But I couldn’t have figured it out without the help of Mr. Baird, my 10th and 11th grade math teacher. He constantly encouraged me to explore the math I was interested in, even if he wasn’t teaching it at that moment. But as my love of math continued to grow, I had a realization: Most others don’t possess the same love I did.

I attempted to talk to others about the amazing properties of Gabriel’s Horn or the innumerable applications Calculus has on the real world and on all of mathematics before it but no one was really interested. I continued to console in Mr. Baird as my explorations took me into Multi-variable Calculus, Set Theory, and Topology. As I learned more and more, however, it continued to trouble me that others didn’t enjoy math as much as I did. I joined the Math Honor Society and began to tutor students to try and expose them to the true and intuitive beauty of mathematics.

It worked! The happiness that shown on their faces as they finally understood various concepts quickly became addicting to me. I needed that feeling, and so I began to look at Mathematics in a way that would help me teach it later. This let me gain a deeper understanding of the material and accelerated my learning greatly. I became positive that the reason the love of math is so sparse is because of the lack of good teachers. It was in that moment that I became positive I wanted to study math. I planned then to get a BS in Math, continue in my Master’s, and even work to enter a PhD program in Mathematics.

And James Madison University has already enabled me to explore my dream. With the help of my advisor, I was able to take Calculus 3 and Discrete Mathematics in my very first semester at JMU. Because of this, I can take Linear Algebra and Differential Equations my second semester, and begin focusing on higher level math my sophomore year at JMU. The fact that this opportunity is open to me makes me ecstatic. JMU has helped me accelerate my learning, and set me on the course to help others in their learning. And for that, I am proud to be a Duke!

I want to again thank Mr. Baird (though I’d be surprised if he ever saw this post…) for inspiring me to study mathematics. Before I even started at JMU (remember, July 26th, 2013 was the summer before my Freshman year), I knew I wanted to major in Math and even get a PhD in Math. Little did I know, I’d be skipping the Master’s part altogether!

And wow, the last sentence of the second paragraph might be something I’ve said word-for-word four years later while discussing Math 167 (something certainly deserving of its own blog post). Back then, I had no idea what the next four years at JMU would be like, but I’m pretty happy with what unfolded – I can’t wait to see where the next five take me!

Thanks for reading,

Jonathan Gerhard

P.S. I know you’re curious, did I get the scholarship?? Nope. It was good practice for the next hundred or so scholarships I applied to though!

Posted in MathematicalADD, Personal

On a set that detects the convergence of a sum of reciprocals using relative density: A proposal for an indicator set and a proof of its nonexistence.

Natural Density

Suppose we take a subset A = \{a_1, a_2, \dots \} of the natural numbers \mathbb{N} = \{1, 2, \dots \} and look at the sum of the reciprocals of its elements:

\sum\limits_{n = 1}^{\infty} \frac{1}{a_n}.

One question we can ask here is: for which subsets A \subset \mathbb{N} does this sum converge? Though there are many routes one can take to try to answer this question, we look towards density.

The natural density of A \subset \mathbb{N} is defined to be

\delta(A) = \lim\limits_{n \rightarrow \infty} \frac{|A \cap [1,n]|}{n}.

We think of this essentially as the probability of choosing an element of A from the natural numbers. Let’s do some examples!

  • We certainly have \delta(\mathbb{N}) = 1.
  • If A \subset B, then \delta(A) \leq \delta(B).
  • Let k\mathbb{N} = \{k, 2k, 3k, \dots \}. There are \lfloor n/k\rfloor elements of k\mathbb{N} less than n, so

\delta(k\mathbb{N}) = \lim\limits_{n \rightarrow \infty} \frac{\lfloor n/k\rfloor}{n} = \lim\limits_{n \rightarrow \infty} \frac{1}{k} = \frac{1}{k}.

  • Let A = \{1, 4, 9, \dots \}. Then we need to know how many squares there are less than n. If k^2 < n, then k < \sqrt{n}, so there are \lfloor \sqrt{n}\rfloor squares less than n. Therefore

\delta(A) = \lim\limits_{n \rightarrow \infty} \frac{\lfloor \sqrt{n}\rfloor}{n} = \lim\limits_{n \rightarrow \infty} \frac{1}{\sqrt{n}} = 0.

Notice that \sum_{n = 1}^{\infty} \frac{1}{n} (being the harmonic series) diverges and \delta(\mathbb{N}) \neq 0. Also, \sum_{n = 1}^{\infty} \frac{1}{n^2} (being the Riemann zeta function evaluated at 2) converges to \pi^2/6 and we just showed \delta(\{1, 4, 9, \dots \}) = 0. Perhaps this is our connection!

(Wishful-Thinking "Theorem") For any A \subset \mathbb{N},

\delta(A) = 0 ~~\text{if and only if}~~\sum_{n = 1}^{\infty} \frac{1}{a_n} < \infty.

As the name of the above “theorem” hints at, this statement is not true. Our counterexample is a prevalent character: the primes. Recall the prime number theorem, which states that the number of primes less than some number n is asymptotically n/\log(n). Then the natural density of the primes P = \{2, 3, 5, \dots \} must be

\delta(P) =\lim\limits_{n \rightarrow \infty}\frac{n/\log(n)}{n} =\lim\limits_{n \rightarrow \infty}\frac{1}{\log(n)} = 0.
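If you want to watch this happen numerically, Sage's built-in prime counting function prime_pi makes it a one-liner (a quick sketch, not a proof; the cutoffs are arbitrary):

for n in [10^3, 10^5, 10^7]:
    print(n, (prime_pi(n) / n).n(digits=3))   # the ratio creeps toward 0, but very slowly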

But in 1737, Euler proved that \sum_{p \in P} \frac{1}{p} diverges! So our wishful-thinking theorem is not true. We call a set A \subset \mathbb{N} bad if it doesn't satisfy our wishful-thinking theorem.

Relative Density

To me, this failure means that the natural numbers are too "large" to detect all sets whose sum of reciprocals diverges. So what if we shrank it? For A,B \subset \mathbb{N}, let's define the relative density of A in B to be

\rho(A,B) = \lim\limits_{n \rightarrow \infty} \frac{|A \cap B \cap [1,n]|}{|B \cap [1,n]|}.

Like the natural density, we think of the relative density of A in B as the probability that we choose an element of A out of B. In fact, notice that

\delta(A) = \rho(A, \mathbb{N}).
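To make the definition concrete, here's a small finite-n approximation of \rho in Sage (the helper name, the test sets, and the cutoff are all just my choices for illustration):

def rho_approx(in_A, in_B, n):
    # Count inside the window [1, n] instead of taking the limit.
    top    = sum(1 for k in range(1, n + 1) if in_A(k) and in_B(k))
    bottom = sum(1 for k in range(1, n + 1) if in_B(k))
    return top / bottom

evens    = lambda k: k % 2 == 0
squares  = lambda k: Integer(k).is_square()
naturals = lambda k: True

print(rho_approx(evens, naturals, 10^5))   # about 1/2, i.e. the natural density of 2N
print(rho_approx(squares, evens, 10^5))    # nearly 0: the squares are sparse among the evens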

As such, we get similar properties as the natural density, with some interesting twists!

  1. If A \subset B, then \rho(B,A) = 1.
  2. If A \cap B = \emptyset, then \rho(A,B) = \rho(B,A) = 0.
  3. If A \subset B \subset C, then \rho(A,C) = \rho(A,B)\rho(B,C).
  4. If A \subset B and A \subset C, then \rho(A,B)\rho(B,C) = \rho(A,C)\rho(C,B).
  5. Most generally, given any A, B, C \subset \mathbb{N}, we have

\rho(A,B)\rho(B,C)\rho(C,A) = \rho(A,C)\rho(C,B)\rho(B,A).

Properties 3 and 4 follow from 5 together with 1, and properties 1 and 2 are immediate from the definition, so we will only prove 5.

(Proof of 5)  This is essentially just a reordering of multiplication:

\rho(A, B)\rho(B,C)\rho(C,A) = \lim\limits_{n \rightarrow \infty} \frac{|A \cap B \cap [1,n]|}{|B \cap [1,n]|}\frac{|B \cap C \cap [1,n]|}{|C \cap [1,n]|}\frac{|C \cap A \cap [1,n]|}{|A \cap [1,n]|}.

We shift all denominators to the right (cycling at the end) and find

\rho(A, B)\rho(B,C)\rho(C,A) = \lim\limits_{n \rightarrow \infty} \frac{|A \cap B \cap [1,n]|}{|A \cap [1,n]|}\frac{|B \cap C \cap [1,n]|}{|B \cap [1,n]|}\frac{|C \cap A \cap [1,n]|}{|C \cap [1,n]|}.

= \rho(B,A)\rho(C,B)\rho(A,C).

Indicator Sets

Our main purpose in studying relative density is the hope that it will lead to an indicator set. We call a set S \subset \mathbb{N} an indicator set if for all A  \subset \mathbb{N}, we have

\rho(A,S) = 0 ~~\text{if and only if}~~ \sum\limits_{a \in A} \frac{1}{a} < \infty.

So S is an indicator set if our wishful-thinking “theorem” is true when \mathbb{N} is replaced by S. We call it this because while \mathbb{N} is “too large”, S is “small enough” to indicate the convergence of the sum of reciprocals of elements of A. There are a lot of interesting properties that indicator sets must have if they exist, but it turns out we can prove their nonexistence with two simple propositions.

(Proposition 1) If \mathcal{U} is an indicator set, then \rho(\mathcal{U}, \mathbb{N}) = 1.

(Proof) If \rho(\mathcal{U},\mathbb{N}) < 1, we must have \rho(\mathcal{U}^c, \mathbb{N}) = 1 - \rho(\mathcal{U},\mathbb{N}) > 0, implying \sum_{v \in \mathcal{U}^c} \frac{1}{v} = \infty (a set with positive natural density grows at most linearly, so its sum of reciprocals diverges like the harmonic series). But \rho(\mathcal{U}^c, \mathcal{U}) = 0, so by the definition of \mathcal{U}, we must have \sum_{v \in \mathcal{U}^c} \frac{1}{v} < \infty. This is a contradiction, so \rho(\mathcal{U},\mathbb{N}) = 1.

The second proposition is not even a statement on indicator sets, but on relative density in general.

(Proposition 2) Given any A,B \subset \mathbb{N}, we have \rho(A \cap B, \mathbb{N}) = \rho(A,B)\rho(B,\mathbb{N}).

(Proof)  We have

\rho(A \cap B, \mathbb{N}) = \lim\limits_{n \rightarrow \infty} \frac{|A \cap B \cap [1,n]|}{n} = \lim\limits_{n \rightarrow \infty} \frac{|A \cap B \cap [1,n]|}{n}\frac{|B \cap [1,n]|}{|B \cap [1,n]|}.

We swap the denominators to find

\rho(A \cap B, \mathbb{N}) = \lim\limits_{n \rightarrow \infty} \frac{|A \cap B \cap [1,n]|}{|B \cap [1,n]|}\frac{|B \cap [1,n]|}{n} = \rho(A,B)\rho(B,\mathbb{N}).

We can now prove our main theorem!

(Main Theorem) There does not exist an indicator set. That is, there does not exist a set \mathcal{U} such that for all A \subset \mathbb{N}, we have \rho(A,\mathcal{U}) = 0 if and only if \sum\limits_{a \in A} \frac{1}{a} < \infty.

(Proof) Suppose there exists an indicator set \mathcal{U}. Let A be any bad set (like the primes, for example). Then Propositions 1 and 2 together tell us that \rho(A, \mathcal{U}) = \rho(A \cap \mathcal{U}, \mathbb{N}). Since A is bad, we have \rho(A, \mathcal{U}) > 0 and \rho(A, \mathbb{N}) = 0. But since A \cap \mathcal{U} \subset A, we also have \rho(A \cap \mathcal{U}, \mathbb{N}) \leq \rho(A, \mathbb{N}). Together, these give

0 < \rho(A, \mathcal{U}) = \rho(A \cap \mathcal{U}, \mathbb{N}) \leq \rho(A, \mathbb{N}) = 0,

which is clearly a contradiction. Therefore, \mathcal{U} cannot exist.

Final Remarks

Note that a “left-indicator set” (as opposed to the “right-indicator set” discussed above) can quickly be shown to not exist as well. Suppose \mathcal{U} was a set such that for all A \subset \mathbb{N}

\rho(\mathcal{U},A) = 0 ~~\text{if and only if}~~ \sum\limits_{a \in A} \frac{1}{a} < \infty.

Then property 1 tells us that \rho(\mathcal{U},\mathcal{U}^{\prime}) = 1 for any subset \mathcal{U}^{\prime} \subset \mathcal{U}, which would imply that \sum_{u^{\prime} \in \mathcal{U}^{\prime}} \frac{1}{u^{\prime}} = \infty for every such subset. Taking \mathcal{U}^{\prime} to be a single element shows this certainly cannot be true.

While the proof of the nonexistence of these indicator sets ended up being fairly short, there are still interesting things going on here. There has been a lot of work relating natural density, sums of reciprocals, and the containment of arbitrarily long arithmetic progressions (ALAPs). These relate as follows.

[Table of implications among natural density, sum of reciprocals, and containment of ALAPs]

If the text inside a box is green, it indicates that the row implies the column (with the text referencing the proof). If it is red, it indicates that the row does not imply the column (with the text giving a counterexample). Blue text indicates that the implication is currently an open question.

For instance, the set \bigcup_{k \geq 0} \{10^k, 10^k + 1, \dots, 10^k + k\} contains arbitrarily long arithmetic progressions, yet its sum of reciprocals converges, since

\sum\limits_{k = 0}^{\infty} \sum\limits_{t = 0}^{k} \frac{1}{10^k + t} \leq \sum\limits_{k = 0}^{\infty} \frac{k+1}{10^k} = \frac{100}{81} < \infty.

Clearly there’s a lot to look at here, so hopefully this is not the last time you will read about relative densities on this blog!

 

Thanks for reading,

Jonathan Gerhard

Posted in MathematicalADD, Random

Integration By Parts Table Method and Strange Sums

Today, I will be writing about calculus. How many of you have heard of the table method for integration by parts? Not many, if my tutees at the Science and Math Learning Center are any indication. If it sounds unfamiliar, then let me show you something awesome! Otherwise, stick around to the end and we'll put a slight twist on the idea and see what happens…

Table Method

Suppose we wanted to solve the following integral:

\int x^3\sin(x) dx.

The usual method would be integration by parts. We set u = x^3 and dv = \sin(x)dx, and find du = 3x^2dx and v = -\cos(x). Then

\int x^3\sin(x)dx = (x^3)(-\cos(x)) - \int (3x^2)(-\cos(x))dx.

Then we do it again, letting u = 3x^2 and dv = -\cos(x)dx, so that du = 6xdx and v = -\sin(x). Therefore

\int x^3 \sin(x)dx = (x^3)(-\cos(x)) - ((3x^2)(-\sin(x)) - \int(6x)(-\sin(x))dx).

Again, we'll let u = 6x and dv = -\sin(x)dx, so du = 6dx and v = \cos(x), and

\int x^3 \sin(x)dx = (x^3)(-\cos(x)) - (3x^2)(-\sin(x)) + (6x)(\cos(x)) - \int(6)(\cos(x))dx

= (x^3)(-\cos(x)) - (3x^2)(-\sin(x)) + (6x)(\cos(x)) - (6)(\sin(x)).

Phew. The way I wrote out this integral in the end is no accident – it reveals a pattern that becomes the Table Method. Let D^k(f) be the k^{th} derivative of f and I^k(f) the k^{th} integral of f. Then we can rewrite the above as

{\small \int x^3\sin(x)dx = D^0(x^3)I^1(\sin(x)) - D^1(x^3)I^2(\sin(x)) + D^2(x^3)I^3(\sin(x)) - D^3(x^3)I^4(\sin(x))}.

Let’s do it in general. We set up a table, with D (for differentiation) in the first column and I (for integration) in the second column. We choose the “u” term (the one we would be taking the derivative of) and put it in the first spot in the D column. Then we put the “dv” term (which we’ll be integrating) in the first spot in the I column.

Then we keep taking derivatives of the term in the D column, placing each derivative in the next row of the D column, until we reach 0. Then we keep taking integrals of the term in the I column, also placing them in successive rows of the I column.

[Table: the D column holds x^3, 3x^2, 6x, 6, 0 and the I column holds \sin(x), -\cos(x), -\sin(x), \cos(x), \sin(x)]

Then we alternate labeling the rows with + and -, always starting with a +. Then we take the product of the term in the k^{th} row of D and the (k+1)^{st} row of I and add these together over all rows, taking into account the sign we just put next to that row.

[The same table with the rows alternately labeled + and -]

This allows us to quickly see that

\int x^3 \sin(x)dx = (x^3)(-\cos(x)) - (3x^2)(-\sin(x)) + (6x)(\cos(x)) - (6)(\sin(x)).
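If you'd like the table built for you, here's a short Sage sketch of the same bookkeeping (my own throwaway helper, not anything built into Sage):

x = var('x')

def table_method(f, g):
    # Walk down the table: differentiate f, integrate g, and alternate signs.
    total, sign = 0, 1
    d, ig = f, g
    while not d.is_zero():
        ig = integrate(ig, x)
        total += sign * d * ig
        d = diff(d, x)
        sign = -sign
    return total

F = table_method(x^3, sin(x))
print(F)                                  # the same expression as above
print(bool(diff(F, x) == x^3*sin(x)))     # True: it really is an antiderivative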

At this point, you could leave and employ the table method at your will, excited to have a quick shortcut for integration by parts in your toolkit. Or you could read on to see how we can use this method to produce strange sums, like Grandi’s Series 1 – 1 + 1 – 1 + … = 1/2.

Strange Sums

Notice an important aspect of the previous example was that the derivative of x^3 was eventually zero. Integration by parts tells us that if D^t(f) = 0 for some t \in \mathbb{N} (and therefore D^k(f) = 0 for all k \geq t), then

\int fg~dx = \sum\limits_{k = 0}^{\infty} (-1)^kD^k(f)I^{k+1}(g).

But what if we didn't force the derivative of f to eventually be zero? Let's start simple, with \int e^xdx. Let f = e^x and g = 1. Then our table would be (extending infinitely downward)

[Table: the D column holds e^x, e^x, e^x, \dots and the I column holds 1, x, x^2/2!, x^3/3!, \dots]

If the table method worked out, we’d have

\int e^x dx = \sum\limits_{k = 0}^{\infty} (-1)^ke^x\frac{x^{k+1}}{(k+1)!}

= -e^x\sum\limits_{k = 0}^{\infty}(-1)^{k+1}\frac{x^{k+1}}{(k+1)!}

= -e^x\sum\limits_{k = 0}^{\infty}\frac{(-x)^{k+1}}{(k+1)!}

= -e^x(-1 + e^{-x})

= e^x - 1
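As a quick numerical sanity check that the series manipulation above does what it claims (the sample point and the number of terms below are arbitrary):

x0, N = 1.5, 30    # arbitrary sample point and number of terms
partial = sum((-1)^k * exp(x0) * x0^(k+1) / factorial(k+1) for k in range(N))
print(partial, exp(x0) - 1)    # the partial sums really do converge to e^x - 1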

But we know the lefthand side is e^x, so we find that

e^x = e^x - 1,

or 0 = -1. That makes no sense…let’s try again. This time, let’s solve \int e^{2x}dx by setting f  = g = e^x. Then our table becomes

[Table: both the D and I columns hold e^x in every row]

and we find

\int e^{2x}dx = \sum\limits_{k = 0}^{\infty} (-1)^ke^xe^x = e^{2x}\sum\limits_{k = 0}^{\infty}(-1)^k.

The lefthand side we know to be (1/2)e^{2x}, so canceling out the e^{2x} from both sides gives us

\frac{1}{2} = \sum\limits_{k = 0}^{\infty} (-1)^k.

This is exactly Grandi’s series! So the first attempt massively failed, whereas the second produced an interesting well-known series. What other interesting things can you come up with using the table method?

Thanks for reading,

Jonathan Gerhard

Posted in MathematicalADD, Random

The Picture Post

This post will consist almost entirely of pictures of the objects created during the past year while using my Shapeways Education Grant. Check out my summary post for more words…

The 3 by 3 Rook’s Graph

The 4 by 4 Rook’s Graph

The Shrikhande Graph modeled to look like the 4 by 4 Rook’s Graph

The “Drum” Shrikhande Graph

The “Playground” Shrikhande Graph

The “Straight Edge” Shrikhande Graph

NOTE: The four different-looking versions of the Shrikhande Graph above are all isomorphic!

A book embedding of K5


The Perko Pair on a Praxinoscope

Thanks for reading,

Jonathan Gerhard

Posted in 3D-Printing, MathematicalADD

3D Printing Graphs, Knots, and…Books?

It’s just about at that point where I am hit with nostalgia of the past year and excitement for the coming year. This time around, however, that excitement is accompanied by sadness – I will be graduating from James Madison University and leaving many of the good friends I’ve made here to continue on in my graduate studies.

But this post isn’t to dwell on such things; instead, I wanted to do a recap of something which has brought me great joy in the past year – my project on 3D printing mathematical objects!

Update (5/16/17): Check out my picture post to see high resolution pictures of all the objects I printed during this project!

The Start of an Idea

Здравствуйте! That means "Hello" in Russian – but forget that, I'm sure we can go with a much more informal Привет at this point! I was in the Math in Moscow program in the Spring of 2016, taking courses in Algebraic Geometry, Commutative and Homological Algebra, and Algebraic Topology. It was there that I learned about a fascinating concept: homotopy equivalence. Two topological spaces are considered homotopy equivalent if there exists a homotopy equivalence between them…just kidding – that's a ridiculous (albeit technically correct) definition! Think about topological spaces as geometric objects: a ball, a square, a donut, or even your shoe. Then one object is homotopy equivalent to another if you can shape the first – bending, stretching, or shrinking, but never cutting or gluing – into the second. Note this can transcend dimension: a 3D ball is homotopy equivalent to a point! The classic example of two objects that are homotopy equivalent is a donut and a coffee cup (see below). More information is given in my first blog post on this project.

[Animation: a coffee mug morphing into a torus]

Upon learning about some wildly unintuitive homotopies (one of my favorites is that a 3-sphere minus a torus is homotopy equivalent to the disjoint union of two solid tori), I had the idea to utilize 3D printing. I had done a project on 3D printing knot invariants with Laura Taalman at JMU my freshman year (final results pictured below), so I thought it was only right to finish off my undergraduate career by doing another 3D printing project with her.

[Image: the 3D printed results from that knot invariants project]

She encouraged me to apply for the Shapeways Education Grant, which I was very happy to end up receiving. I want to thank Shapeways for this grant that allowed me to utilize their fantastic resources and focus on the designing process instead of the troubleshooting that is so often necessary when working with 3D printers! Once the year began, we got started!

Let the Printing Begin!

We wanted to start off simple: a sphere minus a point is homotopy equivalent to a single point. We used Mathematica to create an animation of this homotopy, but I quickly became enamored with Fusion 360. Though Mathematica 11 has some nice 3D printing features built into it, you can attribute the decision to switch to my laziness – Fusion 360 had such a nice user interface with a ton of very useful (and easy to learn) tools.

I began to design a whole array of objects: The Perko Knots on a praxinoscope, the Rook's Graph and (in the case of n = 4) the strangely non-isomorphic Shrikhande Graph, all the while making time for some interesting mathematics. As the fall semester wore on, we approached the first major capstone of this project: An invited talk at the 2017 Joint Mathematics Meetings in January. Not only was it a thrill to be able to share what I had been working on, but I found it to be an invaluable experience to learn from some of the people I consider to be pioneers in the field of mathematical 3D printing.

Let the Printing End!

Alas, this cannot be a complete summary, for the year has not finished! But I will wrap up this post by describing what my plans for the end of the semester are. The math community here at JMU has given so much to me, so I thought it would be nice for the second capstone of my project to involve giving back to that community.


Book embedding of the complete graph on five vertices

The idea of a book embedding of a graph is, in my opinion, very interesting. The entire concept deals not necessarily with some abstract graph, but investigates the possibilities of embedding that graph in what looks like a – well – book! I believe my final goal is a perfect mix of mathematics and 3D printing – I want to

  1. Design and 3D print book embeddings of the Rook’s Graph and the Shrikhande graph, and
  2. Characterize the book thickness of these two graphs, i.e. the minimal number of pages needed to embed the graph.

I want to give these last two prints to the JMU math department to hang in the lobby of Roop Hall, along with a plaque describing the mathematical wonders behind both the graphs and their book embeddings.

Once I finish that, I will finally feel like I put the Shapeways Education Grant to good use! I wanted to finish by again thanking Shapeways for that grant, thanking Laura for agreeing to work on this project with me, and of course, thanking you the reader. I hope you enjoyed!

Jonathan Gerhard

Posted in 3D-Printing, MathematicalADD

On the Book Embeddings of the Rook’s Graph and the Shrikhande Graph

I've already 3D printed one book embedding, which was of K_5, the complete graph on 5 vertices. It looks like this:

Just so we’re all on the same page, let me define a book embedding again. An n-page book embedding of a graph G is

  1. An embedding of the vertices of G along a "spine" in some order.
  2. An embedding of the edges onto the pages such that each edge is contained in exactly one page, and no two edges cross.

The book thickness of a graph G is the minimal number of pages needed to have a book embedding of G. We usually denote this as BT(G). Notice the order of the vertices on the spine could change the book thickness, so if we fix a printing cycle \sigma = (v_1, \dots, v_k), then we define BT(G, \sigma) to be the minimal number of pages needed to have a book embedding of G with the vertices embedded on the spine in the order given by \sigma. Thus,

BT(G) = \min_{\sigma} BT(G, \sigma).

A simple example of the printing cycle affecting book thickness is with the square. Label its vertices starting at the top-left and going clockwise with 1,2,3,4. Then notice that

BT(G, (1,2,3,4)) = 1

but

BT(G, (1,3,2,4)) = 2.
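For a fixed printing cycle, the minimum number of pages is just the chromatic number of the graph recording which edges cross, so tiny examples like this one can be checked in Sage. A throwaway sketch (the helper below is mine, not built in):

def pages_needed(edges, order):
    pos = {v: i for i, v in enumerate(order)}
    def cross(i, j):
        # Two edges need different pages exactly when their endpoints interleave on the spine.
        a, b = sorted(pos[v] for v in edges[i])
        c, d = sorted(pos[v] for v in edges[j])
        return a < c < b < d or c < a < d < b
    conflict = Graph([list(range(len(edges))), cross])   # vertices = edges, adjacency = "they cross"
    return max(1, conflict.chromatic_number())

square = [(1,2), (2,3), (3,4), (1,4)]
print(pages_needed(square, (1,2,3,4)))   # 1
print(pages_needed(square, (1,3,2,4)))   # 2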

We present two goals, one of which will be explored here in slightly more detail, and the other will be explored more later.

The 4 \times 4 Rook's graph R_4 and the Shrikhande graph S are both strongly regular with the same parameters (16,6,2,2) but are not isomorphic. The classic way of seeing this is that the neighborhood of a vertex of R_4 is two disjoint triangles, whereas the neighborhood of a vertex of S is a 6-cycle. Certainly, the neighborhood of a vertex influences possible book embeddings, which leads to our first goal.

Goal 1) Determine the book thickness of both the Rook’s graph and the Shrikhande graph. If they are different, then this would be a cool new way to distinguish them!

The second goal is closer aligned with my 3D printing project. Since the beginning, I have been interested in creating a 3D printed book embedding of some graph to give to the JMU math department so that they can display it in the lobby of Roop Hall. This leads to the second goal.

Goal 2) Create 3D models of a book embedding of both R_4 and S, such that each model actually looks like a book. Then, in addition to being cool objects to have in the lobby of Roop Hall, these could be used to show the various people coming through some Graph Theory and how cool it is! I'd like to make up a little plaque that details the similarities and differences of these graphs, and how you might see that in the book embeddings.

Goal One

I'll expand a bit more on the first goal. There are three results I think will be useful, all of which come from the paper "The book thickness of a graph" by Frank Bernhart and Paul Kainen. The first is

Theorem 1) Given a graph G on p vertices and q edges, we have

BT(G) \geq \lceil\frac{q - p}{p - 3}\rceil.

Theorem 2) Let \theta(G) be the smallest number of planar graphs whose union is G. Then

\theta(G) \leq \lceil \frac{\textbf{BT}(G)}{2} \rceil.

Theorem 3) We have

BT(G) \leq 1 if and only if G is outerplanar.

BT(G) \leq 2 if and only if G is a subgraph of a Hamiltonian planar graph.

We’ll demonstrate how these are useful by showing that BT(R_3) = 3.

Recall that R_3 is the 3 \times 3 Rook’s graph, and so it has 9 vertices and 18 edges. Then by Theorem 1,

BT(R_3) \geq \lceil\frac{18 - 9}{9 - 3}\rceil = 2.

But by Theorem 3, we know that \textbf{BT}(R_3) \geq 3, since R_3 isn’t planar, and therefore can’t be a subgraph of a Hamiltonian planar graph.
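These bounds are easy to sanity-check in Sage (a sketch; here I'm building R_3 as the Cartesian product of two copies of K_3):

R3 = graphs.CompleteGraph(3).cartesian_product(graphs.CompleteGraph(3))
p, q = R3.order(), R3.size()
print(p, q)                       # 9 18
print(ceil((q - p) / (p - 3)))    # 2, the Theorem 1 lower bound
print(R3.is_planar())             # False, so Theorem 3 pushes the bound up to 3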

Another way we could’ve achieved this bound is by Theorem 2. Notice that \theta(R_3) = 2, so

2\leq \lceil \frac{\textbf{BT}(R_3)}{2} \rceil

which means BT(R_3) must be at least 3. Finally, it isn’t too difficult to provide a book embedding of R_3 with three pages, cementing the fact that BT(R_3) = 3. We hope to go into similar analysis for R_4 and the Shrikhande graph in a later post. The next post will address Goal 2 and detail some progress in creating the actual 3D models of these book embeddings!

Thanks for reading,

Jonathan Gerhard

Posted in 3D-Printing, MathematicalADD, Random