Te Tari Pāngarau me te Tatauranga
Department of Mathematics & Statistics

Archived seminars in Mathematics

Seminars 1 to 50

Investigating the neighbourhood of a heterodimensional cycle

Gemma Mason

University of Auckland

Date: Tuesday 10 October 2017

We investigate a four-dimensional ordinary differential equation model for intracellular calcium dynamics. This model exhibits a heterodimensional cycle, which is a heteroclinic connection between two saddle-periodic orbits whose corresponding stable manifolds are of different dimensions. This heterodimensional cycle is a hallmark of nonhyperbolic dynamics, an exciting frontier in theoretical dynamical systems. Observing it in a model with practical applications opens up new connections between applied mathematics and this emerging theoretical knowledge. We wish to examine how the heterodimensional cycle in the flow of this differential equation affects the nearby dynamics. To observe the behaviour of this practical example, we require computational tools and visualisation techniques for invariant manifolds of periodic orbits in four dimensions. We present a three-dimensional Poincaré section, which provides an overview of the model's behaviour. Projections of the four-dimensional flow into three dimensions are then used to explain the behaviour of objects in the Poincaré section in more depth.
Project presentations

Honours and PGDip students

Department of Mathematics and Statistics

Date: Friday 6 October 2017

Jodie Buckby: Model checking for hidden Markov models
Jie Kang: Model averaging for renewal processes
Yu Yang: Robustness of temperature reconstruction for the past 500 years

Sam Bremer: An effective model for particle distribution in waterways
Joshua Mills: Hyperbolic equations and finite difference schemes
Gems of Ramanujan and their lasting impact on mathematics

Ken Ono

Emory University; 2017 NZMS/AMS Maclaurin Lecturer

Date: Thursday 5 October 2017

Note venue of this public lecture
Ramanujan’s work has had a truly transformative effect on modern mathematics, and continues to do so as we understand further lines from his letters and notebooks. This lecture will present some of the aspects of Ramanujan’s work that are most accessible to the general public, and discuss how his findings fundamentally changed modern mathematics and influenced the lecturer’s own work. The speaker is an Associate Producer of the film The Man Who Knew Infinity (starring Dev Patel and Jeremy Irons) about Ramanujan, and will share several clips from the film during the lecture.

Biography: Ken Ono is the Asa Griggs Candler Professor of Mathematics at Emory University. He is considered to be an expert in the theory of integer partitions and modular forms. He has been invited to speak to audiences all over North America, Asia and Europe. His contributions include several monographs and over 150 research and popular articles in number theory, combinatorics and algebra. He received his Ph.D. from UCLA and has received many awards for his research in number theory, including a Guggenheim Fellowship, a Packard Fellowship and a Sloan Fellowship. He was awarded a Presidential Early Career Award for Science and Engineering (PECASE) by Bill Clinton in 2000 and he was named the National Science Foundation’s Distinguished Teaching Scholar in 2005. In addition to being a thesis advisor and postdoctoral mentor, he has also mentored dozens of undergraduates and high school students. He serves as Editor-in-Chief for several journals and is an editor of The Ramanujan Journal. He is also a member of the US National Committee for Mathematics at the National Academy of Science.
Jensen polynomials for Riemann's Xi-function and suitable arithmetic sequences

Ken Ono

Emory University; 2017 NZMS/AMS Maclaurin Lecturer

Date: Thursday 5 October 2017

Note day and time of this seminar
In 1927 Pólya proved that the Riemann Hypothesis is equivalent to the hyperbolicity of the Jensen polynomials for Riemann’s Xi-function. This hyperbolicity had previously been proved only for degrees $d\leq 3$. We obtain an arbitrary-precision asymptotic formula for the derivatives $\Xi^{(2n)}(0)$ which allows us to prove the hyperbolicity of 100% of the Jensen polynomials of each degree. We obtain a general theorem which models such polynomials by Hermite polynomials. This theorem also allows us to prove a conjecture of Chen, Jia, and Wang on the partition function.
This is joint work with Michael Griffin, Larry Rolen and Don Zagier.
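For readers meeting the terminology for the first time, the Jensen polynomial of degree $d$ and shift $n$ attached to a real sequence $\{\alpha(j)\}$ is (a standard definition, supplied here for context):

```latex
J_{\alpha}^{d,n}(X) \;=\; \sum_{j=0}^{d} \binom{d}{j}\,\alpha(n+j)\,X^{j}
```

Such a polynomial is called hyperbolic if all of its zeros are real; Pólya's criterion applies this to the sequence arising from the derivatives $\Xi^{(2n)}(0)$ mentioned in the abstract.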
Gravitational collapse, cosmic censorship and the Penrose inequality: A geometrical perspective

Jörg Frauendiener

Department of Mathematics and Statistics

Date: Tuesday 26 September 2017

In 1965 Penrose proved the first singularity theorem for gravitational fields, which states that under rather general conditions too much gravitational matter and energy confined to a small volume will contract and ultimately collapse to a singularity. He formulated the cosmic censorship hypothesis, according to which such singularities have no influence on the surrounding world: they are hidden behind a horizon. Assuming the validity of this hypothesis, he was later able to derive an inequality relating the area of the horizon to the total mass contained in space. If there were situations in which the Penrose inequality was violated, this would show that there is no cosmic censorship. To date, no violations have been found, but it has also not been possible to give an independent proof of the Penrose inequality in full generality.
In this talk the origin of the Penrose inequality and some attempts and special cases of its proof will be discussed in more detail.
The nonlinear stability of Minkowski spacetime for self-gravitating massive fields

Philippe LeFloch

Université Pierre et Marie Curie

Date: Tuesday 19 September 2017

A joint seminar with the Department of Physics
This lecture will present recent work on a class of partial differential equations arising in mathematical physics in the context of Einstein's theory of gravity. Specifically, I will consider the question of the nonlinear stability of Minkowski spacetime and review the global evolution problem for self-gravitating massive matter. The presentation will be kept at an introductory level, accessible to students and non-experts.
Waves around us. An applied mathematician's perspective

Philippe LeFloch

Université Pierre et Marie Curie, visiting William Evans fellow

Date: Wednesday 13 September 2017

Note time and venue of this public lecture
Waves surround us, and many technological advances were made possible only because engineers, physicists, and applied mathematicians worked together to understand these phenomena. Understanding shock waves was essential to the design of the modern airliners in which we travel. Understanding electromagnetic waves propagating in space (and time!) was essential to the design of the GPS navigation system, and allows us to use cell phones. In this lecture, from the perspective of an applied mathematician, I will illustrate with examples the role of mathematics in overcoming practical problems, using pioneering works by Leonhard Euler, James Maxwell, and Albert Einstein.
The random wave model in quantum chaos

Melissa Tacy

Department of Mathematics and Statistics

Date: Tuesday 12 September 2017

As the energy of a quantum system increases we are able to see echoes of the behaviour of the analogous classical system. Of particular interest are those systems whose classical analogue is chaotic. It is conjectured that such systems cannot display any phase-space concentration. In the 1970s Berry introduced random waves as a model of quantum chaotic billiards. In this talk I will discuss the conjectures for general quantum systems and some of the recent results on random waves.
A computable error bound for MCMC with a proxy

Richard Norton

Department of Mathematics and Statistics

Date: Tuesday 5 September 2017

The aim is to find the expectation of a function of a random variable, given an unnormalized density function for the random variable.

Markov chain Monte Carlo (MCMC) methods compute a sequence of correlated samples of the random variable, and estimate the expectation by an average over the samples. The error from computing only finitely many samples is estimated from estimates of the integrated autocorrelation time and the variance. Generally, to be accurate, MCMC requires cheap evaluations of the density function.
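As a rough illustration of this kind of error estimate (a hypothetical sketch, not the speaker's method or code; all names here are made up), the standard error of an MCMC average is commonly taken as $\sqrt{\tau\sigma^2/N}$, where $\tau$ is the integrated autocorrelation time:

```python
import math
import random

def iact(x, max_lag=None):
    """Crude estimate of the integrated autocorrelation time of a chain x:
    sum empirical autocorrelations until they fall below a cutoff."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    if max_lag is None:
        max_lag = n // 10
    tau = 1.0
    for lag in range(1, max_lag):
        c = sum((x[i] - mean) * (x[i + lag] - mean)
                for i in range(n - lag)) / ((n - lag) * var)
        if c < 0.05:  # stop once correlations have died out
            break
        tau += 2.0 * c
    return tau

# Correlated chain: AR(1) with coefficient rho, true tau = (1+rho)/(1-rho) = 9.
random.seed(1)
rho, n = 0.8, 20000
chain = [0.0]
for _ in range(n - 1):
    chain.append(rho * chain[-1] + math.sqrt(1 - rho ** 2) * random.gauss(0, 1))

mean = sum(chain) / n
var = sum((v - mean) ** 2 for v in chain) / n
tau = iact(chain)
stderr = math.sqrt(tau * var / n)  # error estimate for the MCMC average
```

For this AR(1) chain the true integrated autocorrelation time is 9, so the estimate should land in that neighbourhood; the point is that the correlated chain is worth roughly n/tau independent samples.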

When the density function is costly (in CPU time) to evaluate then we can try replacing it with a 'proxy' that is cheap to evaluate and perform MCMC on the proxy. But what error does this introduce?

I will present a computable upper bound for this error and apply it to a couple of simple examples.
Mudflow resuspension in estuaries: from shear instabilities to mixing layer

Alice Harang

Department of Marine Science

Date: Tuesday 22 August 2017

Mudflow resuspension is a key process for sediment transport in estuaries. This numerical study focuses on the behaviour of the lutocline (the mudflow–water interface) in a shear flow and aims to better understand the mechanism of resuspension of cohesive sediment. The mudflow, i.e. partially consolidated mud, is modelled as an equivalent homogeneous fluid miscible in water, with Newtonian or viscoplastic rheology. The focus of the study is the stability of the interface and the transition to a turbulent mixing layer through shear instabilities. The distinctive features of this flow are the large viscosity of the mud and, when it exhibits viscoplastic behaviour, the yield stress that must be exceeded before it is set in motion. A linear stability analysis assesses the influence of the various parameters of the flow. The non-linear evolution of the flow is then studied using direct numerical simulations (DNS).
Regular random field solutions for stochastic evolution equations

Markus Antoni

Department of Mathematics and Statistics

Date: Tuesday 15 August 2017

In this talk we give an introduction to the theory of stochastic integrals with respect to a Brownian motion $\beta$ and stochastic evolution equations in $L^p$ spaces. Starting with the stochastic heat equation we lift the setting to an abstract level and elaborate strategies to find solutions for stochastic equations. More precisely, for a closed operator $A$ on $L^p(U)$ (e.g. the Laplace operator) we consider equations of the form
$$\mathrm{d}X(t) + A X(t)\,\mathrm{d}t = F(t,X(t))\,\mathrm{d}t + B(t,X(t))\,\mathrm{d}\beta(t)$$
for random fields $X \colon \Omega \times [0,T] \times U \to \mathbb{R}$, where $[0,T]$ is a time interval, $(\Omega,\mathcal{F},\mathbb{P})$ a measure space representing the randomness of the system, and $U$ is typically a domain in $\mathbb{R}^d$ (or again a measure space). We reduce the existence and uniqueness of solutions to a fixed point equation in certain fixed point spaces. To be more precise, we look for mild solutions so that $X(\omega,\cdot,\cdot)$ has values in $L^p(U;L^q[0,T])$ for almost all $\omega \in \Omega$ under appropriate Lipschitz and linear growth conditions on the nonlinearities $F$ and $B$. In contrast to the classical semigroup approach, which gives $X(\omega,\cdot,\cdot) \in L^q([0,T];L^p(U))$, the order of integration is reversed. In combination with concrete examples of stochastic partial differential equations we show that this new approach leads to strong regularity results in particular for the time variable of the random field $X(\omega,t,u)$, e.g. pointwise Hölder estimates for the paths $t \mapsto X(\omega,t,u), \mathbb{P}$-almost surely.
Applications of generalized Polynomial Chaos in ocean wave / sea ice interactions

Johannes Mosig

Department of Mathematics and Statistics

Date: Tuesday 8 August 2017

In this talk I will present a group of computational techniques known as "generalized polynomial chaos" (gPC). These methods can be used to estimate the response of a physical model when parameters are uncertain. Specifically, when some input parameter R of a system is known only in terms of its probability distribution, then gPC methods can be used to efficiently calculate the moments (e.g. expectation and variance) of the output quantities of the model. The results of the gPC methods typically converge exponentially with the number of single model runs / system size.
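To give the flavour of such a method (a toy sketch under my own assumptions, not the speaker's Mathematica package): if the uncertain parameter R is standard normal, the mean and variance of a model output f(R) can be obtained from a small Gauss–Hermite quadrature rule, whose error decays exponentially fast for smooth f.

```python
import math

# 3-point Gauss-Hermite rule for the standard normal weight
# (probabilists' convention): exact for polynomials up to degree 5.
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def gpc_moments(f):
    """Mean and variance of f(R), R ~ N(0,1), by quadrature."""
    mean = sum(w * f(x) for w, x in zip(weights, nodes))
    second = sum(w * f(x) ** 2 for w, x in zip(weights, nodes))
    return mean, second - mean ** 2

# Toy "model output": f(R) = R^2 + 2R, so E[f] = 1 and Var[f] = 6.
mean, var = gpc_moments(lambda r: r * r + 2 * r)
```

Here the three-point rule is exact because the toy output is a polynomial of low degree; for smooth non-polynomial outputs the moments converge exponentially as nodes are added, which is the behaviour described above.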

I will present my Mathematica package which makes it easy to work with gPC methods on any physical model, and demonstrate a few applications in the area of ocean wave/sea ice interactions.
Governing equations for two-state semi-Markov processes: modelling reactive transport in a river with hyporheic exchange

Boris Baeumer

Department of Mathematics and Statistics

Date: Tuesday 1 August 2017

We model the movement of particles through two zones (say river and river bed), where the (random) time spent in one zone (river bed) is not exponentially distributed (the longer a particle is in the zone, the less likely it is that it will be released within the next ten minutes). We take a general probabilistic approach to deduce a system of deterministic partial differential equations (Fokker-Planck) modelling the evolution of the concentration profile. We then take great care to extend this system to incorporate reaction and stochasticity and provide numerics to approximate the solutions.
Singular solutions of the Einstein-Euler equations. PDE structure and physical interpretation

Florian Beyer

Department of Mathematics and Statistics

Date: Tuesday 23 May 2017

Einstein’s equations can be considered as a particular geometric evolution system where the underlying PDEs are essentially of nonlinear wave equation type. Similar to other famous geometric evolution problems, there has been a lot of effort over the last decades to study the formation of singularities of solutions. In general this turns out to be a formidable (and essentially unsolved) task due to the complexity of Einstein’s equations. In this talk I will discuss a recent result by P LeFloch and myself regarding singular solutions of the coupled Einstein-Euler equations. I will explain the analytic PDE aspects of this result and link it to the physical problem of modelling the very early universe.
Theory, modeling, and impact of the floe size distribution of sea ice

Chris Horvat

Harvard University

Date: Tuesday 2 May 2017

Earth's sea ice cover is composed of a myriad of distinct pieces, known as floes, with sizes ranging over many orders of magnitude. The evolution of Arctic climate and ecology is strongly tied to the multi-scale heterogeneity of sea ice, in particular the distribution of these floe sizes. Yet modern climate models still do not simulate the evolution of floes, their size distribution, or important quantities derived from this distribution.

I'll discuss how melt ponding on sea ice floes has dramatically shifted the ecological status quo in the Arctic. Using a combination of simple modeling techniques, observations, and reanalysis products I'll demonstrate that the thinning of sea ice in the past several decades allows for extensive and frequent under-ice phytoplankton blooms, which can have a significant effect on the ecological and carbon cycle in the high latitudes.

I'll also discuss how the thermodynamic evolution of sea ice is determined through the interaction of sea ice floes and ocean eddies, and how these are determined by the floe size distribution. I'll then present a predictive model for the joint statistical distribution of floe sizes and thicknesses (FSTD) which is tested under different forcing scenarios to establish its conservation properties and demonstrate its usability in future climate studies.

Suggested reading:
Horvat, Rees Jones, Iams, Schroeder, Flocco, & Feltham (2017). The frequency and extent of sub-ice phytoplankton blooms in the Arctic Ocean. Science Advances.
Horvat, Tziperman, & Campin (2016) Interaction of sea ice floe size, ocean eddies, and sea ice melting. Geophys. Res. Lett.
Fully pseudospectral time evolution

Jörg Hennig

Department of Mathematics and Statistics

Date: Tuesday 11 April 2017

We demonstrate how time-dependent PDE problems can numerically be solved with a fully pseudospectral scheme that uses spectral expansions with respect to both spatial and time directions. We obtain highly accurate numerical solutions for a moderate number of grid points and observe spectral (i.e. exponential) convergence, a feature known from solving elliptic PDEs with spectral methods. We briefly discuss several applications of this method that we studied in the past couple of years: numerical verification of a universal black hole formula, simulation of Newtonian stellar oscillations, and computation of Gowdy-symmetric cosmological models. Afterwards we look in more detail at an ongoing joint project with Jörg Frauendiener, namely an investigation of the conformally invariant wave equation on a black hole background. This can be considered as a toy model for the propagation of gravitational waves. It turns out that certain logarithmic singularities form at the future boundary, but we still obtain solutions of good accuracy. We also describe how initial data can be chosen that avoid the leading-order logarithmic terms, which further improves the numerical accuracy.
Freedom for Modules

John Clark

Department of Mathematics and Statistics

Date: Tuesday 4 April 2017

Let $R$ be a ring with multiplicative identity, but not necessarily commutative. An additive abelian group $M$ is said to be a (left) $\textit{R-module}$ if there is a (left) multiplication map $R \times M\longrightarrow M$ (the product of $r$ from $R$ with $m$ from $M$ is denoted by $rm$) provided, for all $r, s\in R$ and all $m, n\in M$, we have
$$r(sm) = (rs)m, \; (r+s)m = rm + sm,\; \text{ and }\; r(m+n) = rm + rn.$$
Note that this definition generalises that of a vector space $M$ over a field $R$.

In the special vector space setting, linear algebra introduces the important concepts of linearly (in)dependent subsets and generating subsets of a vector space $M$. The same definitions carry over to modules. A pivotal result in linear algebra says that every vector space $M$ has a ${\bf basis}$, i.e., $M$ has a linearly independent subset $B$ which also generates $M$. Moreover any two bases of $M$ have the same size. This unique size, say $n$, is called the ${\bf dimension}$ of $M$ and we write $\dim(M) = n$.

In this talk we’ll look at how the transfer of these and some related results fails for some rings.
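A standard first example of such failure (added for context; the abstract does not say whether this example appears in the talk): the $\mathbb{Z}$-module $M = \mathbb{Z}/2\mathbb{Z}$ has no basis at all, since

```latex
2 \cdot \bar{1} \;=\; \bar{0} \quad \text{in } \mathbb{Z}/2\mathbb{Z},
\qquad \text{although } 2 \neq 0 \text{ in } \mathbb{Z},
```

so even the singleton $\{\bar{1}\}$ is linearly dependent over $\mathbb{Z}$; hence $M$ is not free and has no dimension in the vector-space sense.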
Effect of distributed surface roughness on the laminar-turbulent transition

Maciej Floryan

University of Western Ontario

Date: Thursday 30 March 2017

Note venue for this seminar
A joint seminar by the Departments of Mathematics and Statistics, and Physics

It has been recognized since the pioneering experiments of Reynolds in 1883 that surface roughness plays a significant role in the dynamics of shear layers. This is a classical problem in fluid dynamics but, nevertheless, its resolution is still lacking. Most of the efforts have been focused on experimental approaches that have resulted in a number of correlations but have failed to uncover the mechanisms responsible for the flow response. Theoretical analyses have also failed to provide a consistent explanation of the flow dynamics. As there are an uncountable number of possible geometrical roughness forms, the problem formulation represents a logical contradiction as it might not be possible to find a general answer to a problem that has an uncountable number of variations. The recent progress towards the theoretical resolution of this apparent contradiction will be discussed and recent results dealing with the problem of distributed surface roughness will be presented. The progress has hinged on the development of the immersed boundary conditions method and the reduced geometry concept. It will be shown that it is possible to propose a rational definition of a hydraulically smooth surface by invoking flow bifurcations associated with the presence of roughness. Successful resolution of roughness problems gives access to the design of surface roughness for passive flow control where drag reduction can be achieved either directly, through re-arrangement of the form of the flow that results in the reduction of the shear stress, or indirectly, through delay of the laminar-turbulent transition.
The geometry of diversities

David Bryant

Department of Mathematics and Statistics

Date: Tuesday 21 March 2017

The field of metric geometry was wrenched from mathematical obscurity 20 years ago by the discovery that some really hard optimisation problems in graph theory could be solved, at least approximately, by converting them into metric embedding problems. Recently, we have found that many celebrated results for metric embeddings can be generalised, in a satisfying way, to diversities. A diversity is like a metric except that it defines values for all finite subsets, not just all pairs. The mathematics of diversities may still be obscure, but we've found it to be rich and multifaceted, with connections and applications in all sorts of places.
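For the record, a diversity in the sense introduced by Bryant and Tupper is a pair $(X,\delta)$, where $\delta$ assigns a non-negative value to every finite subset of $X$ and satisfies:

```latex
\delta(A) = 0 \iff |A| \le 1, \qquad
\delta(A \cup B) \;\le\; \delta(A \cup C) + \delta(C \cup B)
\quad \text{whenever } C \neq \emptyset.
```

Restricting $\delta$ to two-element subsets recovers an ordinary metric, which is the precise sense in which diversities generalise metrics.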
Mass-loss due to gravitational waves

Vee-Liem Saw

Department of Mathematics and Statistics

Date: Tuesday 28 February 2017

The theoretical basis for the energy carried away by gravitational waves emitted by an isolated gravitating system was first formulated by Hermann Bondi during the 1960s. Recent observations of distant supernovae revealed that the rate of expansion of our universe is accelerating (Nobel Prize in Physics, 2011), which may be well explained by adding a positive cosmological constant to the Einstein field equations of general relativity. By solving the Newman-Penrose equations (which are equivalent to the Einstein field equations), we generalise this notion of Bondi energy and thereby provide a firm theoretical description of how an isolated gravitating system loses energy as it radiates gravitational waves, in a universe that expands at an accelerated rate. This complements the observational front, marked by LIGO's announcement in February 2016 that gravitational waves from the merger of a binary black hole system had been detected.
An entropy-based measure for comparing distributions

Rajeev Rajaram

Kent State University, Ohio

Date: Tuesday 7 February 2017

NOTE venue is not our usual
In this talk, I will develop an entropy-based measure called case-based entropy which can be used to compare the diversity of distributions. The measure is based on computing the support of a Shannon-equivalent equiprobable distribution. It can also compare whole distributions or parts of distributions in a scale-free manner. I will develop the main idea from scratch and will keep the talk accessible to graduate students and researchers alike. The utility of the measure is still being explored, but one of the latest uses I have found is in economics, as a better method than the Gini index for comparing income or wealth inequality. I have also used the measure to compare the diversity of complexity in a variety of distributions, from the velocities of galaxies to the Maxwell–Boltzmann, Bose–Einstein and Fermi–Dirac energy distributions.
3D-Dynamic Geometry Systems for learning and instruction

Olaf Knapp

University of Education Weingarten, Germany

Date: Wednesday 25 January 2017

Note day, time and venue of this special seminar
3D-dynamic geometry systems (3D-DGS) with graphical user interfaces can open up a wide range of creative activities for mathematics education. Examples will be given from a research project into their use for didactic presentations, visualization, synthetic geometry, 3D modelling, morphing and mapping, design, and analogization. They allow interactive exploration of mathematical concepts. I will show their potential to become an integral part of mathematics education and to modify the curriculum. There is also the opportunity for hands-on experience with a 3D-DGS.
Extensions of the multiset sampler

Scotland Leman

Virginia Tech, USA

Date: Tuesday 8 November 2016

NOTE day and time of this seminar
In this talk I will primarily discuss the Multiset Sampler (MSS): a general ensemble based Markov Chain Monte Carlo (MCMC) method for sampling from complicated stochastic models. After which, I will briefly introduce the audience to my interactive visual analytics based research.

Proposal distributions for complex structures are essential for virtually all MCMC sampling methods. However, such proposal distributions are difficult to construct so that their probability distribution matches that of the true target distribution, in turn hampering the efficiency of the overall MCMC scheme. The MSS entails sampling from an augmented distribution that has more desirable mixing properties than the original target model, while utilizing simple independent proposal distributions that are easily tuned. I will discuss applications of the MSS to sampling from tree-based models (e.g. Bayesian CART; phylogenetic models), and to general model selection, model averaging and predictive sampling.

In the final 10 minutes of the presentation I will discuss my research interests in interactive visual analytics and the Visual To Parametric Interaction (V2PI) paradigm. I'll discuss the general concepts in V2PI with an application of Multidimensional Scaling, its technical merits, and the integration of such concepts into core statistics undergraduate and graduate programs.
New methods for estimating spectral clustering change points for multivariate time series

Ivor Cribben

University of Alberta

Date: Wednesday 19 October 2016

NOTE day and time of this seminar
Spectral clustering is a computationally feasible and model-free method widely used in the identification of communities in networks. We introduce a data-driven method, namely Network Change Points Detection (NCPD), which detects change points in the network structure of a multivariate time series, with each component of the time series represented by a node in the network. Spectral clustering allows us to consider high dimensional time series where the number of time series is greater than the number of time points. NCPD allows for estimation of both the time of change in the network structure and the graph between each pair of change points, without prior knowledge of the number or location of the change points. Permutation and bootstrapping methods are used to perform inference on the change points. NCPD is applied to various simulated high dimensional data sets as well as to a resting state functional magnetic resonance imaging (fMRI) data set. The new methodology also allows us to identify common functional states across subjects and groups. Extensions of the method are also discussed. Finally, the method promises to offer a deep insight into the large-scale characterisations and dynamics of the brain.
Tuning of MCMC with stochastic autoregressive proposals

Richard Norton

Department of Mathematics and Statistics

Date: Tuesday 18 October 2016

The Metropolis-Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution, usually to estimate an expected value. At each step, the algorithm proposes a new random sample, and then decides whether to accept or not. Efficiency depends on the computational cost per random sample, and the correlation of the sequence - a sequence of independent random samples is ideal.
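As a concrete reference point (an illustrative random-walk sketch only, under my own assumptions; the talk concerns the more general autoregressive proposals, and none of this code is the speaker's):

```python
import math
import random

def metropolis_hastings(log_density, x0, step, n_samples):
    """Random-walk Metropolis-Hastings: propose x' = x + step * N(0, 1),
    accept with probability min(1, p(x') / p(x))."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + step * random.gauss(0, 1)
        if math.log(random.random()) < log_density(proposal) - log_density(x):
            x = proposal      # accept the proposal
        samples.append(x)     # on rejection the chain repeats its state
    return samples

# Target: standard normal. Estimate E[X^2], whose exact value is 1.
random.seed(0)
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 2.4, 50000)
estimate = sum(v * v for v in chain) / len(chain)
```

The step size 2.4 follows the classical rule-of-thumb scaling for a one-dimensional Gaussian target; because the samples are correlated, the effective sample size is well below 50000, which is exactly the efficiency issue the abstract describes.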

I analyse the efficiency of Metropolis-Hastings algorithms with stochastic autoregressive proposals. These include many existing methods, such as the Metropolis-Adjusted Langevin Algorithm (MALA), the preconditioned Crank-Nicolson algorithm (pCN) and the Hybrid Monte Carlo algorithm (HMC). Previously, each of these algorithms required its own separate analysis. Using my analysis I can extend what is known about these algorithms, as well as analyse new algorithms.
Inverse prediction for paleoclimate models

John Tipton

Colorado State University

Date: Tuesday 18 October 2016

NOTE day and time of this seminar
Many scientific disciplines have strong traditions of developing models to approximate nature. Traditionally, statistical models have not included scientific models and have instead focused on regression methods that exploit correlation structures in data. The development of Bayesian methods has generated many examples of forward models that bridge the gap between scientific and statistical disciplines. The ability to fit forward models using Bayesian methods has generated interest in paleoclimate reconstructions, but there are many challenges in model construction and estimation that remain.

I will present two statistical reconstructions of climate variables using paleoclimate proxy data. The first example is a joint reconstruction of temperature and precipitation from tree rings using a mechanistic process model. The second reconstruction uses microbial species assemblage data to predict peat bog water table depth. I validate predictive skill using proper scoring rules in simulation experiments, providing justification for the empirical reconstruction. Results show forward models that leverage scientific knowledge can improve paleoclimate reconstruction skill and increase understanding of the latent natural processes.
Ultrahigh dimensional variable selection for interpolation of point referenced spatial data

Benjamin Fitzpatrick

Queensland University of Technology

Date: Monday 17 October 2016

NOTE day and time of this seminar
When making inferences concerning the environment, ground-truthed data will frequently be available as point-referenced (geostatistical) observations accompanied by a rich ensemble of potentially relevant remotely sensed and in-situ observations.
Modern soil mapping is one such example characterised by the need to interpolate geostatistical observations from soil cores and the availability of data on large numbers of environmental characteristics for consideration as covariates to aid this interpolation.

In this talk I will outline my application of Least Absolute Shrinkage and Selection Operator (LASSO) regularized multiple linear regression (MLR) to build models for predicting full-cover maps of soil carbon when the number of potential covariates greatly exceeds the number of observations available (the p > n, or ultrahigh dimensional, scenario). I will outline how I have applied LASSO-regularized MLR models to data from multiple (geographic) sites and discuss investigations into treatments of site membership in models and the geographic transferability of the models developed. I will also present novel visualisations of the results of ultrahigh dimensional variable selection and briefly outline some related work in ground cover classification from remotely sensed imagery.
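For context, the LASSO-regularized MLR fit referred to here solves the standard penalized least-squares problem (notation mine, not from the talk):

```latex
\hat{\beta} \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}} \;
\frac{1}{2n} \lVert y - X\beta \rVert_{2}^{2} \;+\; \lambda \lVert \beta \rVert_{1}
```

The $\ell_1$ penalty sets many coefficients exactly to zero, which is what makes covariate selection feasible in the $p > n$ regime described above.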

Key references:
Fitzpatrick, B. R., Lamb, D. W., & Mengersen, K. (2016). Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study. PLoS ONE, 11(9): e0162489.
Fitzpatrick, B. R., Lamb, D. W., & Mengersen, K. (2016). Assessing Site Effects and Geographic Transferability when Interpolating Point Referenced Spatial Data: A Digital Soil Mapping Case Study.
Undergraduate numeracy at Otago University

Chris Linsell

College of Education

Date: Tuesday 6 September 2016

There have long been concerns about the numeracy of many undergraduates, based on anecdotal evidence. We now have data from two schools within the university that document the issue. This seminar will provide a definition of what is meant by numeracy at the university level and will describe issues around its assessment and the impact of current University Entrance requirements.
Constitutive modelling and unsteady flow of yield stress materials

Miguel Moyers-Gonzalez

University of Canterbury

Date: Tuesday 23 August 2016

Viscoplastic materials such as Bingham or Herschel-Bulkley fluids have a yield stress which must be exceeded before they flow. Their behaviour lies somewhere between liquids and solids. In this talk I will first present a microscopic Gibbs field model that mimics the macroscopic yielding behaviour of a viscoplastic fluid as a means to qualitatively validate our constitutive model. In the second part of the talk we look at laminar unsteady pipe flow of a Carbopol gel, which is a model viscoplastic fluid. By looking in detail at the solid-fluid transition of this material, we found a strong coupling between the irreversible deformation states and the phenomenon of wall slip (where the fluid behaves as if it is sliding along the wall).
Semiclassical analysis in PDE

Melissa Tacy

Australian National University

Date: Tuesday 23 August 2016

Note day, time and venue of this special seminar
Semiclassical analysis arose as a set of techniques for studying the high energy (or semiclassical) limit of quantum mechanics. These techniques have the advantage that intuition derived from the quantum-classical correspondence principle can guide our technical development. In this talk I will introduce some of the key techniques and discuss results such as the $L^{p}$ growth for products of Laplacian eigenfunctions and high energy phase space concentration estimates.
A fast iterative method for two-dimensional wave scattering by a large array of inclusions

Fabien Montiel

Department of Mathematics and Statistics

Date: Monday 22 August 2016

Note day and time of this special seminar
In a one-dimensional homogeneous medium, linear wave scattering by an array of inclusions, e.g. beads on a string, can be reduced to a multiple reflection/transmission problem, in which the waves reflected and transmitted by an inclusion become incident waves on the adjacent inclusions. Under time-harmonic conditions, fast iterative methods can be used to obtain the solution of this class of scattering problems. In a two-dimensional medium, however, such methods cannot be directly extended as there is no natural way of uniquely ordering a finite number of arbitrarily positioned inclusions, e.g. circles, in the plane. A semi-analytical method was devised to solve deterministically the scattering of time-harmonic waves by a large finite array of inclusions in two dimensions. The method consists of clustering the inclusions into adjacent parallel slabs. The solution is obtained by combining plane-wave expansions of the field scattered by each slab with a fast iterative technique for slab-slab interactions similar to the one-dimensional method mentioned above.

In this talk, I will describe this so-called slab-clustering method (SCM) and demonstrate how it provides a convenient framework to analyse the evolution of a multi-directional wave field through a large random array of inclusions. I will consider several applications of the method in acoustics and water-wave science. In particular, I will discuss some model predictions based on the SCM that have generated key insights into the directional properties of water wave fields propagating in ice-covered oceans.
Towards whole cell simulation

Mark Flegg

Monash University

Date: Wednesday 17 August 2016

Note day, time and venue of this special seminar
Biological cells are the fundamental building blocks of life. At a molecular level, a cell operates according to the hard mathematical laws of physics and chemistry. Encoded in the network of molecular interactions are robust mechanisms which collectively determine the properties of life itself. Mathematical insight into cell-scale behaviour is fundamentally limited by the computational scalability and convergence of the mathematical frameworks that are used to describe physical systems at molecular scales (both spatial and temporal). In this presentation, I will highlight the main problems with classical mathematical approaches used to study intracellular spatio-temporal environments and present multiscale methods I have developed in the last 5 years which have allowed for improved accuracy and efficiency. The objective of this research is to lay mathematical foundations for progress in the highly interdisciplinary mission of whole cell simulation at the level of individual molecules, a goal which has been termed a 'Grand Challenge of the 21st Century'. The mathematical content of this talk is rather varied, as is the nature of applied mathematics. This research draws on partial differential equation theory, perturbation theory, N-body theory, random walks and stochastic processes, as well as a number of miscellaneous areas of mathematics.
Using the IBVP for the conformal field equations to study gravitational perturbations of a black hole

Chris Stevens

Department of Mathematics and Statistics

Date: Tuesday 16 August 2016

The aim of this talk is to introduce the initial boundary value problem (IBVP) for the conformal field equations (CFEs), and as an application study gravitational perturbations of a black hole space-time.

The CFEs are a different mathematical representation of Einstein's field equations that allow one to study "infinity" of a space-time without any sort of limiting procedure. This is of interest because, in general relativity, infinity is the only place where energy is well defined.

In this talk, the main ideas of the CFEs will be discussed, along with the issues associated with forming an IBVP for them. A framework for the IBVP will be presented and numerical evidence of its success will be given. As an application I will discuss the problem of shooting a gravitational wave into a black hole. In particular, I will discuss how the IBVP is formulated for this situation and how to calculate the so-called "Bondi-energy" at infinity. The resulting expression is found to reproduce the famous Bondi-Sachs mass loss.
Stochastic partial differential equations: regularity and approximation

Petru Cioica-Licht

Department of Mathematics and Statistics

Date: Monday 15 August 2016

Note day and time of this special seminar
Stochastic partial differential equations (SPDEs, for short) are mathematical models for evolutions in space and time, which are influenced by noise. They are aimed at describing phenomena in physics, chemistry, epidemiology, economics, and many other disciplines. Although we can prove existence and uniqueness of a solution to various classes of such equations, in general, we do not have an explicit representation of this solution. Thus, in order to make those models ready to use for applications, we need efficient numerical methods for approximating their solutions. And to determine the efficiency of an approximation method, we usually need to analyse the regularity of the target object, which is, in our case, the solution of the SPDE.

The aim of this talk is to present some recent results concerning the regularity of SPDEs and to point out their relevance for the question of developing efficient numerical methods for solving these equations. Before doing this, we first explain the meaning of the different parts of a typical SPDE. For simplicity, we focus on the most basic example, the stochastic heat equation driven by a (cylindrical) Wiener process. It arises from the common deterministic heat equation if we add what is called 'white noise'.
Partitions and the representation theory of the symmetric groups

Kay Jin Lim

Nanyang Technological University

Date: Wednesday 3 August 2016

The representation theory of the symmetric groups is closely related to combinatorial objects. For example, the simple ordinary and modular representations of the symmetric groups are parametrised by partitions and p-restricted partitions respectively. In this seminar, we discuss various relations between the properties of partitions and the structures of the classical modular representations of the symmetric groups. In particular, I will present a new combinatorial property which gives us the exact label of a signed Young module which is isomorphic to a simple Specht module.

This is joint work with Susanne Danz.

Note day, time and venue of this special seminar
Convexity in free analysis

Igor Klep

University of Auckland

Date: Thursday 9 June 2016

Free analysis provides an analytic framework for dealing with quantities with the highest degree of noncommutativity, such as large random matrices. In the talk we will explore a natural extension of the notion of convexity to matrix spaces, the so-called matrix convex sets. We shall give an appropriate analog of the Hahn-Banach theorem and present some of its applications.
Using the finite volume method to do statistics

Richard Norton

Department of Mathematics and Statistics

Date: Thursday 2 June 2016

The finite volume method is a numerical method mainly used in computational fluid dynamics. It is well suited to solving conservation law PDEs because it can exactly preserve the conserved quantity. In this talk I describe how to use a finite volume method to approximate the Frobenius-Perron operator, an operator that describes the time evolution of a probability density function in a dynamical system. The finite volume method will be shown to exactly preserve two essential properties of probability density functions: they must integrate to 1 and be positive. The positivity-preserving property is new for finite volume methods and requires that a CFL condition be satisfied. The finite volume method can be used instead of the Kalman filter, and produces accurate results even when the probability density function is multi-modal.
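The two preserved properties can be illustrated with a generic Ulam-type (cell-averaged) discretisation of the Frobenius-Perron operator. This is a sketch under assumptions, not the speaker's scheme: the tent map, cell count, and sampling estimator below are all illustrative. The point is that the discrete operator is a row-stochastic matrix, so densities stay non-negative and keep unit integral.

```python
import numpy as np

def ulam_matrix(f, n_cells=100, samples_per_cell=1000, seed=0):
    """Cell-averaged approximation of the Frobenius-Perron operator of map f
    on [0,1]: each cell's mass is redistributed to the cells its image hits,
    estimated here by Monte Carlo sampling within the cell."""
    rng = np.random.default_rng(seed)
    P = np.zeros((n_cells, n_cells))
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    for i in range(n_cells):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        j = np.clip((f(x) * n_cells).astype(int), 0, n_cells - 1)
        np.add.at(P[i], j, 1.0 / samples_per_cell)
    return P  # row-stochastic: non-negative entries, rows sum to 1

tent = lambda x: np.where(x < 0.5, 2 * x, 2 - 2 * x)   # illustrative map
P = ulam_matrix(tent)
rho = np.full(100, 1.0)      # piecewise-constant density, integral = 1
for _ in range(50):
    rho = rho @ P            # evolve the density under the discrete operator
```

Because each row of P sums to 1 and has non-negative entries, the evolved rho remains a valid probability density at every step, which is the discrete analogue of the two properties mentioned in the abstract.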
Project presentations

Honours and PGDip students

Department of Mathematics and Statistics

Date: Friday 27 May 2016

Michel de Lange : Deep learning
Georgia Anderson : Probabilistic linear discriminant analysis
Nick Gelling : Automatic differentiation in R

15-MINUTE BREAK 2.40-2.55

Alex Blennerhassett : Toeplitz algebra of a directed graph
Zoe Luo : Wavelet models for evolutionary distance
Xueyao Lu : Making sense of the λ-coalescent
Terry Collins-Hawkins : Reactive diffusion in systems with memory
Josh Ritchie : Linearisation of hyperbolic constraint equations

CJ Marland : Extending matchings of graphs: a survey
This mathematics project presentation takes place at 12 noon on Thursday 26 May, in room 241.
Classical Mathematical Conjectures as a motivator to mathematics research

Mike Hendy

Date: Thursday 12 May 2016

My interest in mathematics research was ignited by my secondary mathematics teacher, Robin Patterson, who introduced our class to some (then unsolved) mathematical problems. An examination of Euler's prime-generating quadratics at high school was the seed for my first published paper 12 years later. Fermat's Last Theorem and the 4-colour conjecture motivated me through graduate study and steered me into algebraic number theory for my PhD. Although I played no part in their subsequent proofs, I gained a lot of useful research techniques and knowledge from tinkering with them. Perhaps their greatest influence was in whetting my appetite for mathematics research. Hadamard's conjecture remains unsolved, but my investigation into that problem was pivotal to a breakthrough in modelling molecular phylogenetics.
In this seminar I will reflect on the role that each of these four problems played in my own career as a researcher in mathematics and give an outline of each problem. I hope others might also see that "playing" with such problems could be useful in motivating and training future mathematics researchers.
Stochastic partial differential equations: regularity and approximation

Petru Cioica-Licht

Department of Mathematics and Statistics

Date: Thursday 28 April 2016

Stochastic partial differential equations (SPDEs) are mathematical models for evolutions in space and time, which are corrupted by noise. Although we can prove existence and uniqueness of a solution to various classes of such equations, in general, we do not have an explicit representation of this solution. Thus, in order to make those models ready to use for applications, we need efficient numerical methods for approximating their solutions. And to determine the efficiency of an approximation method, we usually need to analyse the regularity of the target object, which is, in our case, the solution of the SPDE.
The aim of this talk is to present some recent results concerning the regularity of SPDEs and to point out their relevance for the question of developing efficient numerical methods for solving these equations. Before doing this, we first explain the meaning of the different parts of a typical SPDE. For simplicity, we focus on the most basic example, the stochastic heat equation driven by a (cylindrical) Wiener process. It arises from the common deterministic heat equation if we add what is called `white noise'.
Spatial transmission of 2009 pandemic influenza in the US

Julia Gog

University of Cambridge; NZMS Forder Lecturer

Date: Tuesday 5 April 2016

Detailed medical insurance claims data from the US in 2009 allow us to explore the spatial dynamics of a pandemic in greater depth than ever before. This talk will outline what we observed in terms of the spatial and temporal dynamics of the pandemic in the US. Modelling work allows us to test hypotheses on the importance of different factors, such as whether schools were in session, climate and city population size, to see which were important in determining the dynamics of disease spread.

Here I will also show results from ongoing studies with collaborators and some of the challenges. We have very fine-grained spatial data, and clearly we would like to use this, but disaggregating too far leaves us with little signal. With fitted models and a bit of mathematical creativity, we can infer likely transmission routes during the pandemic and hypothesise what the phylogeography (spatial distribution of viral variants) might look like. Finally, looking at different age groups separately reveals a little more about why the pandemic wave was so slow.
Epidemics and viruses, the mathematics of disease

Julia Gog

University of Cambridge; NZMS Forder Lecturer

Date: Monday 4 April 2016

Mathematics is an essential tool for helping us understand and control infectious diseases, from the scale of a single virus particle through to a global pandemic. Using detailed data and the toolkit of mathematical modelling, we explore the 2009 influenza pandemic at a greater depth than was possible for any previous pandemic. The results are surprising. We know the modern world is astonishingly well connected internationally, so things should spread quickly. However, influenza does not like to conform to our expectations!
Curiosities in burn-off chip-firing games

Mark Kayll

University of Montana

Date: Thursday 24 March 2016

Start by placing piles of indistinguishable chips on the vertices of a graph. A vertex can fire if it's supercritical; i.e., if its chip count exceeds its valency. When this happens, it sends one chip to each neighbour and annihilates one chip. Initialize a game by firing all possible vertices until no supercriticals remain. Then drop chips one-by-one on randomly selected vertices, at each step firing any supercritical ones. Perhaps surprisingly, this seemingly haphazard process admits analysis. And besides having diverse applications (e.g., in modelling avalanches, earthquakes, traffic jams, and brain activity), chip-firing reaches into numerous mathematical crevices. The latter include, alphabetically, algebraic combinatorics, discrepancy theory, enumeration, graph theory, stochastic processes, and the list could go on (to zonotopes). I'll share some joint work - with Dave Perkins - that touches on a few items from this list. The talk will be accessible to non-specialists, I promise.
Gravitational waves: a new window to the universe

Jörg Frauendiener

Date: Wednesday 23 March 2016

University of Otago Public Lecture
In 1916 Einstein predicted, on the basis of his new theory of general relativity, that gravitational waves should exist. Since the early 1960s, scientists have tried to measure them, but the search was unsuccessful until very recently. On the 14th of September 2015 the two LIGO detectors measured a gravitational wave signal which could only have come from a binary black hole system. What does this measurement mean for science and for us?
Development of a numerical model for the optimization of large offshore wave energy farms

Francesc Fàbregas Flavià

École Centrale de Nantes

Date: Thursday 17 March 2016

The optimization of wave energy farms requires numerical models aimed at predicting as accurately as possible the production of a single device on a given site over long periods (typically a year).

Such models have been developed as specialized software, generally using Boundary Element Methods (BEM), in the framework of the theory of potential flow for the description of wave/device interaction. They are globally efficient for the optimization of one device alone or a small group of devices under simplified and rather idealized conditions.

But now, as we advance towards application to real cases of multi-MW farms featuring, for instance, O(100) machines, these models can no longer be used for optimization and a new generation of fast-running computer codes must be developed.
Efficient computation of likelihoods of physical traits of species

Gordon Hiscott

Department of Mathematics and Statistics

Date: Thursday 3 March 2016

We present new methods for efficiently computing likelihoods of visible physical traits (phenotypes) of species which are connected by an evolutionary tree. These methods combine an existing dynamic programming algorithm for likelihood computation with methods for numerical integration. We have already applied these particular methods to a dataset on extrafloral nectaries (EFNs) across a large evolutionary tree connecting species of Fabales plants. We compare the different numerical integration techniques that can be applied to the dynamic programming algorithm. In addition, we compare our numerical integration results to the published results of a “precursor” model applied to the same EFN dataset. These results include not only likelihood approximations, but also changes in phenotype along the tree and the Akaike Information Criterion (AIC), which is used to determine the relative quality of a statistical model.
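The dynamic programming algorithm referred to above is, in spirit, the classical pruning (post-order) recursion for trait likelihoods on a tree. The sketch below is a generic illustration for a two-state trait (e.g. presence/absence of EFNs), not the authors' model: the tree, branch lengths, transition rates, and root distribution are all made-up assumptions.

```python
import math

def transition(t, a, b):
    """Closed-form transition matrix of a 2-state CTMC over time t,
    with rate a for 0 -> 1 and rate b for 1 -> 0."""
    s, e = a + b, math.exp(-(a + b) * t)
    return [[(b + a * e) / s, (a - a * e) / s],
            [(b - b * e) / s, (a + b * e) / s]]

def prune(node, a, b):
    """Post-order dynamic programming: returns partial likelihoods
    L[s] = P(observed tip states below node | node is in state s)."""
    if "state" in node:                          # tip: observed phenotype
        return [1.0 if s == node["state"] else 0.0 for s in (0, 1)]
    like = [1.0, 1.0]
    for child, t in node["children"]:
        P = transition(t, a, b)
        cl = prune(child, a, b)
        for s in (0, 1):
            like[s] *= P[s][0] * cl[0] + P[s][1] * cl[1]
    return like

# Toy 3-tip tree ((A:0.2, B:0.2):0.3, C:0.5); trait present (1) in A and B.
tree = {"children": [
    ({"children": [({"state": 1}, 0.2), ({"state": 1}, 0.2)]}, 0.3),
    ({"state": 0}, 0.5)]}
root = prune(tree, a=0.5, b=0.5)
pi = [0.5, 0.5]                                  # assumed root distribution
lik = pi[0] * root[0] + pi[1] * root[1]
```

The methods in the talk replace pieces of such a recursion with numerical integration where the evolving quantity is continuous; this discrete sketch only shows the dynamic programming skeleton being extended.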
Wave propagation through irregular media

Hyuck Chung

Auckland University of Technology

Date: Tuesday 17 November 2015

In this talk, I will show solution methods for computing the wave field through irregular media. The waves considered here are water waves over an irregular seabed, and sound waves scattered by cylindrical obstacles. The mathematical models for these waves use the linear wave theory, that is, Laplace's equation for the velocity potential of the incompressible water and the Helmholtz equation for the sound pressure in air. Applied mathematicians and engineers are interested in the wave attenuation caused by these irregular properties. I will show how to represent mathematically the irregular/random seabed geometry in water wave problems, and the mixed boundary conditions for the cylinders in scattering problems. Engineers have been using the finite element or boundary element method to study these wave propagation problems in finer and finer detail with increasing computing power. However, it is difficult to extract the underlying physics from these numerical methods. Furthermore, most of the computing packages are black boxes. Applied mathematicians have been working to extend the linear wave theory to incorporate the irregularities. I will talk about the recent progress made by the applied mathematicians with whom I have been working.

Project presentations, Maths honours students

Department of Mathematics and Statistics

Date: Friday 23 October 2015

2.00 : Calum Nicholson, Wavelets and direct limits
2.25 : Pareoranga Luiten-Apirana, Morita equivalence of Leavitt path algebras
2.50 : Tom McCone, Primitive ideals in graph algebras
Galerkin / Finite element methods for nonlinear and dispersive wave equations

Dimitrios Mitsotakis

Victoria University of Wellington

Date: Tuesday 6 October 2015

The equations describing water waves in ideal fluids, known as the Euler equations, appear to be exceedingly complex. Certain assumptions on the wave amplitude and wavelength lead to mathematical models that simplify considerably the mathematics involved. In this talk we consider a class of such models known as Boussinesq systems. Boussinesq systems are comprised of partial differential equations with nonlinear and dispersive terms. We review some theoretical properties of these models such as the existence and uniqueness of smooth solutions. We also present and analyse Galerkin / finite element methods for their numerical solution. Galerkin methods appear to be very efficient for the approximation of smooth solutions of Boussinesq models in plane domains with complicated boundaries. Applications to the propagation of solitary waves and the generation and propagation of tsunamis are discussed.
Quantum computing and cellular phones

Robert Calderbank

Duke University

Date: Tuesday 29 September 2015

Coding theory revolves around the question of what can be accomplished with only memory and redundancy. When we ask what enables the things that transmit and store information, we discover codes at work, connecting the world of geometry to the world of algorithms. This talk will focus on those connections that link the real world of Euclidean geometry to the world of binary geometry that we associate with Hamming. It will include the mathematical framework for error correction in quantum computing and code design for wireless communication with multiple antennas.