The graduate program in Mathematical Sciences is hosting a Research Speaker Series showcasing the program's faculty, who will discuss some of the most active topics in the mathematical sciences today. It’s a great event to learn more about the mathematics program and how it can be applied in business settings.

The events will be held on two Wednesday evenings in February, the 5th and the 12th, from 6:00 to 7:30 p.m. at the College of Charleston North Campus off exit 15 on I-526.

Talks 1 and 2 will take place on February 5, 2020, and the topics are:

A Glimpse of Soliton Theory, by Alex Kasman

Although it started as an attempt to understand a particularly strange wave observed by a ship designer on a Scottish canal in 1834, the area of mathematics known as “soliton theory” has grown into one of the most interdisciplinary and fascinating areas of cutting-edge research today. It combines algebra, geometry, and calculus, and has applications in particle physics, biology, signal transmission, and, of course, hydrodynamics. There is a large and active hub of researchers in this area at the College of Charleston. This brief talk will summarize the history of soliton theory and focus on the surprising connections between the different mathematical subjects that it has revealed.
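For the curious, the canonical model in this area is the Korteweg–de Vries (KdV) equation, u_t + 6uu_x + u_xxx = 0, whose one-soliton solution is a sech²-shaped wave that travels faster the taller it is. The equation and solution below are standard facts, not specific to this talk; the sketch simply evaluates the solution on a grid and checks that the crest moves at the expected speed.

```python
import numpy as np

def kdv_one_soliton(x, t, c=4.0):
    """One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0:
    a wave of amplitude c/2 moving right at speed c -- taller solitons
    travel faster, the hallmark of the wave Scott Russell observed."""
    return (c / 2) / np.cosh(np.sqrt(c) / 2 * (x - c * t)) ** 2

x = np.linspace(-20, 20, 4001)
peak_t0 = x[np.argmax(kdv_one_soliton(x, 0.0))]   # crest location at t = 0
peak_t2 = x[np.argmax(kdv_one_soliton(x, 2.0))]   # crest location at t = 2
print(peak_t2 - peak_t0)  # ≈ c * t = 8 units: the crest moved at speed c
```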

Evolving Adaptive Coincidence-Detecting Neurons, by Garrett Mitchener

I will describe a computational experiment in which a selection-mutation process evolves neuron-like cells, combining evolutionary and biochemical dynamics. The simulated organisms, called agents, are designed to resemble single cells, each of which has an internal state consisting of counts of abstract molecules, plus a genome that specifies how they interact. These artificial reaction networks can perform any bit-wise computation. For this project, the goal is to start with random genomes and subject them to selective breeding, mutation, and recombination so that they evolve the ability to detect coincidences in a spike train, one of the essential timing-based computations performed by living neurons. When two input spikes arrive separated by a short delay, the agent should fire an output spike of its own, but when spikes arrive widely separated, the agent should produce no output spike. During the selective breeding process, agents are rated based on how well they process a variety of spike trains, and are more likely to survive and reproduce if they earn a high rating. Once the population has discovered good solutions to that basic coincidence detection task, agents are given an additional Hebbian learning task. After receiving many closely spaced spikes, they should fire more eagerly even when spikes arrive somewhat separated. After a period of low activity, they should fire more skeptically, only after spikes arrive very close together. The simulation generally succeeds, discovering genomes encoding reaction networks that transfer activity from input to output, but with feedback loops that inhibit the transfer and only allow it to succeed when input spikes are close. Some of these inhibitory reactions are themselves inhibited by sustained input activity, accomplishing the Hebbian learning task using a mechanism similar to that of NMDA receptors. The population maintains considerable genetic variation, and takes advantage of gene duplication as it evolves successful mechanisms.
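To give a flavor of the selection-mutation idea, here is a toy sketch far simpler than the reaction-network agents described above: assuming a hypothetical leaky-integrator unit with just two evolvable parameters (a decay rate and a firing threshold), a basic breed-mutate-select loop can evolve the same short-vs-long delay discrimination.

```python
import random

def respond(decay, threshold, delay):
    """Two input spikes separated by `delay` time steps drive a leaky
    integrator; return True if it fires after the second spike."""
    s = 1.0                     # first input spike
    for _ in range(delay):
        s *= decay              # internal state decays between spikes
    s += 1.0                    # second input spike
    return s > threshold

def fitness(genome):
    """Reward firing on closely spaced spikes and silence on distant ones."""
    decay, threshold = genome
    fires = sum(respond(decay, threshold, d) for d in (1, 2, 3))
    silences = sum(not respond(decay, threshold, d) for d in (8, 10, 12))
    return fires + silences     # perfect score is 6

def evolve(pop_size=50, gens=100, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 0.9), rng.uniform(0.5, 2.5))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]          # top 20% survive unchanged
        children = []
        for _ in range(pop_size - len(elite)):
            d, t = rng.choice(elite)         # mutate a copy of a parent
            children.append((min(max(d + rng.gauss(0, 0.05), 0.01), 0.99),
                             max(t + rng.gauss(0, 0.05), 0.0)))
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # 6: fires on short delays, stays silent on long ones
```

The evolved threshold settles between the decayed-plus-fresh state for long delays and the one for short delays, which is the same qualitative solution the abstract describes at the level of reaction networks.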

Talks 3 and 4 will take place on February 12, 2020, and the topics are:

The Rankability of Data, by Amy Langville

This talk poses and solves a new problem, the rankability problem, which refers to a dataset’s inherent ability to produce a meaningful ranking of its items. Ranking is a fundamental data science task. Its applications are numerous and include web search, data mining, cybersecurity, machine learning, and statistical learning theory. Yet little attention has been paid to the question of whether a dataset is suitable for ranking. As a result, when a ranking method is applied to an unrankable dataset, the resulting ranking may not be reliable. The rankability problem asks the following: How can rankability be quantified? At what point is a dynamic, time-evolving graph rankable? If a dataset has low rankability, what modifications can be made, and which of them most improve the graph’s rankability? We present a combinatorial approach to a rankability measure and then compare several algorithms for computing this new measure. Finally, we apply our new measure to several datasets.
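The abstract does not spell out the measure, but one simple combinatorial proxy for rankability (illustrative only, not necessarily the speaker's measure) is the minimum number of "upsets" — dominance edges that point backward — over all possible orderings of the items. A score of zero means the data form a perfect dominance chain; a cycle guarantees at least one upset no matter how you rank.

```python
from itertools import permutations

def min_violations(A):
    """Minimum number of upsets (edges inconsistent with a ranking) over
    all orderings of the items; 0 means the data are perfectly rankable.
    A[i][j] == 1 means item i dominates (beats) item j.
    Brute force, so only suitable for small n."""
    n = len(A)
    best = None
    for order in permutations(range(n)):
        # count edges pointing from a lower-ranked to a higher-ranked item
        v = sum(A[order[j]][order[i]]
                for i in range(n) for j in range(i + 1, n))
        best = v if best is None else min(best, v)
    return best

# Perfect dominance chain: 0 beats 1,2,3; 1 beats 2,3; 2 beats 3.
chain = [[0,1,1,1], [0,0,1,1], [0,0,0,1], [0,0,0,0]]
# A 3-cycle (0 beats 1, 1 beats 2, 2 beats 0) cannot be ranked cleanly.
cycle = [[0,1,0], [0,0,1], [1,0,0]]
print(min_violations(chain), min_violations(cycle))  # 0 1
```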

An Introduction to Optimal Stopping Theory, by Martin Jones

Optimal stopping problems involve observing a sequence of random variables in order. At each stage, the observer must decide whether to stop observing and receive the current observation as a reward, or to forgo it and continue observing the sequence, with no chance to recall passed-over observations. In this talk we will look at some famous optimal stopping problems, some applications, and several open problems in the area.
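The most famous example of this setup is the secretary problem: skip roughly the first n/e candidates, then accept the first one better than everything seen so far, which selects the overall best about 37% of the time. The 1/e rule is a standard result; the Monte Carlo sketch below (with arbitrary parameter choices) just estimates that success rate.

```python
import random

def secretary_trial(n, rng):
    """One run of the secretary problem under the 1/e rule: skip the
    first n/e arrivals, then accept the first record-setter."""
    ranks = list(range(n))
    rng.shuffle(ranks)              # ranks[i]: quality of i-th arrival; n-1 is best
    cutoff = round(n / 2.718281828)
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1       # stopped here: did we pick the best?
    return ranks[-1] == n - 1       # forced to take the final observation

rng = random.Random(0)
trials = 20000
wins = sum(secretary_trial(50, rng) for _ in range(trials))
print(wins / trials)                # close to 1/e ≈ 0.368
```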

We hope to see you there!