All Seminars

Title: A computational model of drug delivery through microcirculation to compare different tumor treatment options
Seminar: Numerical Analysis and Scientific Computing
Speaker: Paolo Zunino of University of Pittsburgh
Contact: TBA
Date: 2014-03-27 at 4:00PM
Venue: MSC N304
Abstract:
Starting from the fundamental laws of filtration and transport in biological tissues, we develop a mathematical model able to capture the interplay between blood perfusion, fluid exchange with the interstitial volume, and mass transport in the capillary bed, through the capillary walls, and into the surrounding tissue. These phenomena are accounted for at the microscale, where the capillary bed and the interstitial volume are viewed as two separate regions. The capillary bed is described as a network of vessels carrying blood flow. We complement the model with a state-of-the-art numerical solver based on the finite element method. The numerical scheme rests on the idea of representing the capillary bed as a network of one-dimensional channels that, owing to the natural leakage of capillaries, acts as a concentrated source of flow immersed in the interstitial volume. As a result, it can be classified as an embedded multiscale method. We apply the model to study drug delivery to tumors. Owing to its general foundations, the model can be adapted to describe and compare various treatment options. In particular, we consider drug delivery by bolus injection and by nanoparticles, which are in turn injected into the blood stream. The computational approach lends itself to a systematic quantification of treatment performance, enabling the analysis of interstitial drug concentration levels, drug metabolization rates, cell survival fractions, and the corresponding time courses. Our study suggests that for treatment based on bolus injection, the drug dose is not optimally delivered to the tumor interstitial volume; using nanoparticles as intermediate drug carriers overcomes the shortcomings of this delivery approach. Being directly derived from the fundamental laws of flow and transport, the model rests on general foundations and lends itself to extension in different directions.
On one hand, we plan to combine it with a poroelastic description of the interstitial tissue, in order to capture the interplay of mechanical deformations and transport phenomena. On the other hand, the model may be adapted in the future to study different types of cancer, provided that suitable metrics are available to quantify the transport properties of a specific tumor mass.
Title: Forms of Toric Varieties
Seminar: Algebra
Speaker: Alex Duncan of University of Michigan
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2014-03-25 at 4:00PM
Venue: W302
Abstract:
A toric variety is a special kind of compactification of a torus. A basic example of a toric variety is projective space $\mathbb{P}^n$: given an $(n+1)$-dimensional vector space $V$, there is a canonical projection map from $V$ (minus the origin) to $\mathbb{P}^n$. A construction of Cox generalizes this situation to toric varieties. I will introduce toric varieties, then show how one may use Cox's construction to classify their forms over non-algebraically closed fields using Galois cohomology.
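The basic example in the abstract can be written out explicitly; the following quotient description of projective space is standard (notation mine):

```latex
% Projective space as a quotient: a nonzero vector of V = k^{n+1}
% is sent to the line it spans.
\[
  \pi \colon V \setminus \{0\} \longrightarrow \mathbb{P}^n,
  \qquad
  (x_0, \dots, x_n) \longmapsto [x_0 : \cdots : x_n],
\]
\[
  \mathbb{P}^n \;\cong\; \bigl(V \setminus \{0\}\bigr) / k^{\times}.
\]
```

Cox's construction generalizes this picture, replacing $V \setminus \{0\}$ by an open subset of an affine space and $k^{\times}$ by a larger diagonalizable group.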
Title: Enabling Highly Accurate Large-Scale Phylogenetic Estimation
Seminar: General Colloquium
Speaker: Shel Swenson of University of Southern California
Contact: Steve Batterson, sb@mathcs.emory.edu
Date: 2014-03-25 at 4:00PM
Venue: W303
Abstract:
Evolutionary histories of sets of molecular sequences are a fundamental tool in many biological and biomedical questions of societal importance, including biodiversity conservation, drug development, and even forensic investigations. The best methods for estimating these evolutionary histories, or phylogenetic trees, are based on NP-hard optimization problems, and thus phylogenetic analyses of large-scale datasets are extremely computationally intensive. The continually diminishing costs and increasing throughput of DNA sequencing technologies will lead to an ever greater demand for methods capable of producing accurate phylogenetic trees on complex, large-scale molecular datasets. In this talk, I will describe algorithms my collaborators and I have developed to address this demand. I will present SuperFine and ASTRAL, two divide-and-conquer approaches with desirable theoretical properties and excellent empirical performance. Both are supertree approaches in that they divide a larger taxon set into subsets, estimate trees on those subsets, and apply a supertree method that assembles a tree on the entire set of taxa from the smaller "source" trees. SuperFine is designed to handle datasets in which source-tree conflict is due only to estimation error, while ASTRAL is designed to handle source-tree conflict due to both estimation error and incomplete lineage sorting, which can cause gene trees to differ from the underlying species tree. I will present supertree methods in a mathematical context, focusing on some theoretical properties of MRP (Matrix Representation with Parsimony), the most popular supertree method, and of SuperFine, which outperforms MRP. I will also describe a desirable statistical property of ASTRAL and this method's potential to enable highly accurate genome-scale phylogenetic analysis.
Title: Algebraic cycles and degeneration
Seminar: Algebra
Speaker: Jaya Iyer of IMSC Chennai
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2014-03-25 at 5:00PM
Venue: W302
Abstract:
We will introduce some questions on the theory of algebraic cycles, and later discuss degenerations of certain one-cycles on jacobian of a curve and on triple product of a curve. These correspond to elements in higher Chow groups.
Title: Multi-Structured Inference in Text-to-Text Generation
Seminar: Computer Science
Speaker: Kapil Thadani of Columbia University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2014-03-21 at 3:00PM
Venue: W201
Abstract:
Automated personal assistants and summarization tools are increasingly prevalent in the modern age of mobile computing, but their limitations highlight the longstanding challenges of natural language generation. Focused text-to-text generation problems present an opportunity to work toward general-purpose statistical models for text generation without strong assumptions about a domain or semantic representation. In this talk, I will present recent work on a supervised sentence compression task in which a compact integer linear programming formulation is used to simultaneously recover the heterogeneous structures that specify an output sentence. This inference strategy avoids cyclic and disconnected structures through commodity flow networks, generalizing over several recent techniques and yielding significant performance gains on standard evaluation corpora. I will then discuss a number of extensions to this multi-structured generation approach. One line of research explores approximation strategies using Lagrangian relaxation, dynamic programming, and linear programming in order to speed up inference while preserving performance. Other extensions exploit the flexibility of the formulation, extending it with minimal additions to new problems such as the more challenging task of merging sentences, as well as to new structures, including directed acyclic graphs that represent frame semantics. Finally, I will briefly discuss our use of multi-structured inference in other natural language applications such as summarization and alignment.
Title: A Brief History of Ramsey Theory
Seminar: General Colloquium
Speaker: Steven La Fleur of Emory University
Contact: Steve Batterson, sb@mathcs.emory.edu
Date: 2014-03-21 at 4:00PM
Venue: W303
Abstract:
Ramsey's theorem asserts that within large enough systems, complete disorder is impossible. This talk will focus on the history of Ramsey theory, as well as on Ramsey himself. We will discuss some of the motivating questions in the early 20th century leading up to the pivotal result by F.P. Ramsey in 1927, as well as the effect this result has had on mathematics, and specifically on combinatorics, since then. We will look at some of the variations of the initial problem that have been considered over the last century and partially assess the current state of affairs for these topics. Part of the talk will also examine F.P. Ramsey himself; we will briefly mention some of his other contributions to subjects such as mathematical logic (his main interest when he stated his now-famous theorem), as well as economic theory and probability.
Title: Opinion Mining for the Internet: Models, Algorithms and Predictive Analytics
Seminar: Computer Science
Speaker: Arjun Mukherjee of University of Illinois at Chicago
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2014-03-20 at 4:00PM
Venue: W302
Abstract:
The massive amount of user-generated content in social media offers new forms of actionable intelligence. Public sentiments in debates, blogs, and news comments are crucial to governmental agencies for passing new bills/policies, gauging social unrest, and predicting elections and socio-economic indices. The goal of my research is to build robust statistical models for opinion mining, with applications to marketing and the social and behavioral sciences. To achieve this goal, a number of research challenges need to be addressed. The first challenge is fine-grained information extraction that can capture diverse types of opinions (e.g., agreement/disagreement, contention/controversy) and various other latent sentiments expressed in social conversations and discussions. State-of-the-art machinery (e.g., topic modeling) falls short for such a task; I develop several novel knowledge-induced sentiment topic models that respect notions of human semantics. The second challenge is that social sentiments are inherently dynamic and change over time. To leverage sentiments over time for predictive analytics (e.g., predicting financial markets), I develop Bayesian nonparametric topic-based sentiment time-series and vector autoregression models. The third challenge is filtering deceptive opinion spam/fraud. It is estimated that 15-20% of opinions on the Web are fake; hence, detecting opinion spam is a precondition for reliable opinion mining. In this talk, I will present novel statistical models for sentiment analysis and discuss two key frameworks: (1) semi-supervised graphical models for mining fine-grained opinions in social conversations, and (2) Bayesian nonparametrics, sentiment time-series, and vector autoregression models for stock market prediction. In the latter part of the talk, I will discuss the problem of opinion spam and shed light on some techniques for filtering it.
The focus will be on modeling collusion and combating group spam in e-commerce reviews. The talk will conclude with a discussion of my ongoing research and future research vision in opinion contagions, forecasting socio-economic indices, and healthcare.
Title: Optimization with sparse matrix cone constraints
Colloquium: N/A
Speaker: Martin Andersen of Technical University of Denmark
Contact: James Nagy, nagy@mathcs.emory.edu
Date: 2014-03-17 at 1:00PM
Venue: W306
Abstract:
Optimization problems with sparse matrix cone constraints arise naturally in a wide range of applications, and such problems can often be solved efficiently by carefully exploiting the underlying structure. Two kinds of sparse matrix cones are of particular interest: the cone of symmetric positive semidefinite matrices with a given sparsity pattern, and its dual cone, the cone of sparse, positive semidefinite-completable matrices. These cones are very general and include, as special cases, the nonnegative orthant, the quadratic cone, and the cone of positive semidefinite matrices. Using techniques from sparse numerical linear algebra, the structure of sparse matrix cones can be exploited to construct faster optimization algorithms. This talk will focus on the usefulness of sparse matrix cone formulations, demonstrated through numerical examples drawn from a variety of problems such as optimal power flow and robust estimation.
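As a small illustration of the semidefinite-completable cone mentioned above (a toy example of my own, not taken from the talk): for a chordal sparsity pattern such as a tridiagonal one, a partial symmetric matrix has a positive semidefinite completion exactly when its fully specified principal submatrices are positive semidefinite, and the missing entry can then be filled in explicitly.

```python
import numpy as np

# A symmetric partial matrix with tridiagonal sparsity pattern:
# entries (0,2)/(2,0) are unspecified (NaN), all others are given.
X = np.array([[2.0,    1.0, np.nan],
              [1.0,    2.0, 1.0],
              [np.nan, 1.0, 2.0]])

# Tridiagonal patterns are chordal, so a PSD completion exists iff
# every fully specified principal submatrix (here the two overlapping
# 2x2 blocks) is positive semidefinite.
for blk in (X[:2, :2], X[1:, 1:]):
    assert np.all(np.linalg.eigvalsh(blk) >= 0)

# The maximum-determinant PSD completion of a tridiagonal pattern
# fills the missing corner entry with X[0,1] * X[1,2] / X[1,1].
x02 = X[0, 1] * X[1, 2] / X[1, 1]
Xc = X.copy()
Xc[0, 2] = Xc[2, 0] = x02

# All eigenvalues of the completed matrix are nonnegative, so the
# partial matrix belongs to the PSD-completable cone.
print(np.linalg.eigvalsh(Xc))
```

The diagonal-only sparsity pattern recovers the nonnegative orthant as a special case, matching the generality claimed in the abstract.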
Title: Secure and Privacy-Assured Outsourced Cloud Data Services
Seminar: Computer Science
Speaker: Ming Li of Utah State University
Contact: Vaidy Sunderam, vss@emory.edu
Date: 2014-03-17 at 4:00PM
Venue: W303
Abstract:
Cloud computing is envisioned as the next-generation architecture of IT enterprises, providing convenient remote access to massively scalable data storage and application services. Despite the cloud’s promise of huge potential economic savings, by outsourcing data services to the cloud users lose physical control over their data, and cloud service providers can no longer be trusted to guarantee data security and privacy. This has led to a paradigm shift in cloud security research in recent years, under which many issues, including data confidentiality, access control, integrity protection, and utilization, need to be revisited. In this talk, I will present our research efforts in data security and privacy in cloud computing, which aim at returning full control over outsourced data to their owners through cryptographic approaches. The first part introduces a scalable and owner-centric secure data sharing scheme, in which owners can cryptographically enforce fine-grained data access control on any untrusted server by specifying access policies based on attributes of the data itself and of authorized users; this is achieved by adapting a new cryptographic primitive called attribute-based encryption. The second part gives an overview of our other research projects, including secure integrity auditing of shared outsourced data (without physically possessing a copy of the data) and privacy-preserving searches over encrypted cloud data (without letting the cloud learn either the data contents or the search keywords). Finally, I will outline future research directions in secure computation outsourcing, big data security and privacy, and secure cyber-physical systems.
Title: Regularization in Tomography - Dealing with Ambiguity and Noisy Data
Seminar: Computational Mathematics
Speaker: Per Christian Hansen of Technical University of Denmark
Contact: James Nagy, nagy@mathcs.emory.edu
Date: 2014-03-13 at 2:00PM
Venue: W306
Abstract:
Tomographic reconstructions are routinely computed every day. Our reconstruction algorithms are so reliable that we sometimes forget we are actually dealing with inverse problems with inherent stability problems. This is because the algorithms automatically incorporate regularization techniques that, in most cases, handle the stability issues very well.

In this talk we take a basic look at the inverse problem of CT reconstruction, in order to understand the stability problems that manifest themselves in solutions that may be very sensitive to data errors and may also fail to be unique. We demonstrate how regularization is used to avoid these problems and make the reconstruction process stable, and how regularization is incorporated in standard reconstruction algorithms. Moreover, we shall see that different regularization techniques have different impacts on the computed reconstructions.
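The sensitivity to data errors described above can be seen in a few lines of NumPy. This is a generic Tikhonov regularization sketch on a synthetic ill-conditioned operator, my own illustration rather than the speaker's algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny ill-conditioned forward operator with rapidly decaying
# singular values (an illustrative stand-in for a CT system matrix).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)
A = U @ np.diag(s) @ V.T

x_true = np.sin(np.linspace(0, np.pi, n))        # smooth "image"
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy measurements

# Naive inversion: noise in b is amplified by 1/s_i for small s_i.
x_naive = np.linalg.solve(A, b)

# Tikhonov regularization: minimize ||A x - b||^2 + lam * ||x||^2,
# solved via the normal equations (A^T A + lam I) x = A^T b.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(err_naive, err_reg)  # the regularized error is typically far smaller
```

The regularization parameter `lam` trades bias against noise amplification, which is one concrete sense in which different regularization choices yield different reconstructions.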