# MPI III: Domain decomposition, continued

Today's [assignment](https://classroom.github.com/a/Y9wVURC8).

## Solving the wave equation

To make things a bit more realistic and interesting, I did some more coding to turn the derivative calculation from a standalone test into a
small application: a solver for the wave equation

$$
\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0.
$$

I picked the wave equation because it makes use of the `derivative()` function
that calculates a central-difference approximation to
$\frac{\partial u}{\partial x}$. In order to be able to reuse this function, I
moved it out of `test_derivative.cxx` into a separate source file
`derivative.cxx`, and then I implemented the wave equation solver.

You can follow the history of the commits in the assignment repo to see how I did the work. I started out from the serial `test_derivative.cxx` example and turned it into a wave equation solver, and only then did I do the parallelization work (last class's homework). Generally speaking, that's often a good approach to parallel programming -- get the serial code working first, and then parallelize it.

With MPI, however, it's often a good idea to at least have the skeleton of the parallel code in place from the beginning, even if it doesn't do anything yet, so that the parallelization is properly integrated into the code from the start. That's because with MPI, you can't easily parallelize a working serial code one step at a time, like you can with OpenMP. With OpenMP, you can just add a `#pragma omp for` here and there, and the code will still run, just not in parallel. With MPI, the whole parallel structure has to be in place; if you just add some MPI calls here and there to an otherwise serial code, it will likely not work at all.

I started out by splitting off the calculation of the derivative into a separate function, and then I implemented the wave equation solver in a separate source file `wave_equation.cxx`. I then proceeded with the steps to parallelize the derivative calculation (last class's homework): decomposing the domain, dealing with output, and filling the ghost cells appropriately. I first got the ghost cells working specifically for 2 MPI processes, then for 4, and finally for any number of MPI processes (as long as the problem size is divisible by the number of processes, which is a common requirement for domain decomposition).


### Your turn

- Parallelize the wave equation.

  It turns out that once the derivative function is parallelized, pretty much
  all the work required to parallelize the whole wave equation solver is
  already done -- what remains is mostly boilerplate that was done in
  `test_derivative.cxx` and needs to be repeated for
  `wave_equation.cxx`.
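  For reference, the core of the solver is just a time loop around the
  derivative calculation. The sketch below uses forward Euler purely for
  illustration (the integrator, names, and ghost-cell handling in
  `wave_equation.cxx` may differ); in the parallel version, the periodic
  ghost fill at the end of each step is replaced by the MPI exchange.

  ```cpp
  #include <cassert>
  #include <cmath>
  #include <vector>

  // One time step of du/dt + c du/dx = 0: u <- u - c * dt * du/dx,
  // with a central-difference derivative and one ghost point per end.
  void step(std::vector<double>& u, double c, double dt, double dx) {
    const std::size_t n = u.size() - 2;   // interior points
    std::vector<double> du(n);
    for (std::size_t i = 0; i < n; ++i)   // central difference
      du[i] = (u[i + 2] - u[i]) / (2 * dx);
    for (std::size_t i = 0; i < n; ++i)   // forward Euler update
      u[i + 1] -= c * dt * du[i];
    u[0] = u[n];       // periodic ghost cells -- the serial stand-in
    u[n + 1] = u[1];   // for the MPI ghost exchange
  }

  int main() {
    std::vector<double> u(12, 1.5); // constant field: du/dx = 0
    step(u, 1.0, 0.01, 0.1);
    for (double v : u) assert(std::fabs(v - 1.5) < 1e-12);
    return 0;
  }
  ```

  The point of the exercise is that `step()` itself needs no MPI calls at
  all: only the ghost fill and the surrounding setup/output boilerplate
  change between the serial and parallel versions.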

- (optional) Having to redo and duplicate code is something that one usually
  tries to avoid, so the current situation provides an opportunity to practice
  abstracting / consolidating code that is repeated between `test_derivative.cxx`
  and `wave_equation.cxx`. We've already done the most important part, having
  the `derivative()` function in a separate file where it can be used by both
  codes. But more can be done, so feel free ;).

  Note: There's a point where too much abstraction / generalization adds so much
  complexity that it's not worth it. YMMV.

  Another note: I would advise against trying to consolidate the `MPI_Init()` /
  `MPI_Finalize()` calls. These are usually best kept in the `main()` program,
  accepting that there are two lines of repeated code.
