All Seminars

Title: Reading Between the Lines of Datacenter Logs
Seminar: Computer Science
Speaker: Dr. Nosayba El-Sayed of MIT
Contact: TBA
Date: 2018-02-20 at 1:00PM
Venue: Atwood Chemistry Building Room 240
Abstract:
Designing datacenters that are reliable, energy-efficient, and capable of delivering high performance and high utilization is a nontrivial problem facing scientists, businesses, and governments alike. In this talk, I will demonstrate how analyzing large datasets from different organizations helped us uncover interesting (and often surprising) patterns in the behavior of systems and applications in these large-scale platforms. I will show how real-world data helped us tackle critical questions, such as how temperature impacts server reliability in places like Google, and how well users configure the computing jobs they submit to shared clusters (spoiler alert: not very well!). Finally, I will demonstrate how simple machine learning techniques can be leveraged to accurately predict job failures in datacenters, using data that is easily collected in current platforms.
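The abstract does not say which models the speaker uses. As a toy illustration of the final point only, here is a minimal sketch of predicting job failures from easily collected features with hand-rolled logistic regression; the feature names, numbers, and failure patterns below are all invented for illustration and are not from the talk:

```python
import math, random

random.seed(0)

def make_job():
    """Synthetic job record: (features, failed-label). Entirely made up."""
    failed = random.random() < 0.5
    # pretend failed jobs over-request memory and get resubmitted more often
    mem_ratio = random.gauss(3.0 if failed else 1.2, 0.4)
    resubmits = random.gauss(2.5 if failed else 0.5, 0.6)
    return (mem_ratio, resubmits), 1.0 if failed else 0.0

data = [make_job() for _ in range(400)]

# plain batch gradient descent on the logistic log-loss
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(200):
    gw = [0.0, 0.0]
    gb = 0.0
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        gw[0] += (p - y) * x1
        gw[1] += (p - y) * x2
        gb += p - y
    w[0] -= lr * gw[0] / len(data)
    w[1] -= lr * gw[1] / len(data)
    b -= lr * gb / len(data)

# training accuracy of the fitted classifier
acc = sum(
    ((1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b))) > 0.5) == (y == 1.0))
    for (x1, x2), y in data
) / len(data)
```

On this cleanly separated synthetic data the classifier is accurate; real cluster traces are far noisier, which is part of what makes the talk's result interesting.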
Title: Joint Athens–Atlanta Number Theory Seminar
Seminar: Algebra
Speaker: David Harbater (University of Pennsylvania) and Jacob Tsimerman (University of Toronto)
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2018-02-20 at 4:00PM
Venue: TBA
Abstract:
Talks will be at the University of Georgia.

David Harbater (University of Pennsylvania), 4:00
Local-global principles for zero-cycles over semi-global fields
Classical local-global principles are given over global fields. This talk will discuss such principles over semi-global fields, which are function fields of curves defined over a complete discretely valued field. Paralleling a result that Y. Liang proved over number fields, we prove a local-global principle for zero-cycles on varieties over semi-global fields. This builds on earlier work about local-global principles for rational points. (Joint work with J.-L. Colliot-Thélène, J. Hartmann, D. Krashen, R. Parimala, J. Suresh.)

Jacob Tsimerman (University of Toronto), 5:15
Cohen-Lenstra heuristics in the presence of roots of unity
The class group is a natural abelian group one can associate to a number field, and it is natural to ask how it varies in families. Cohen and Lenstra famously proposed a model for families of quadratic fields based on random matrices of large rank, and this was later generalized by Cohen and Martinet to general number fields. However, Malle observed that their model has issues when the base field contains roots of unity. We explain that in this setting there are naturally defined additional invariants on the class group, and based on this we propose a refined model in the number field setting rooted in random matrix theory. Our conjecture keeps track not only of the underlying group structure, but also of certain natural pairings one can define in the presence of roots of unity. Specifically, if the base field contains roots of unity, we keep track of the class group G together with a naturally defined homomorphism G*[n] → G from the n-torsion of the Pontryagin dual of G to G. Using methods of Ellenberg-Venkatesh-Westerland, we can prove part of our conjecture in the function field setting.
Title: Brauer classes supporting an involution
Seminar: Algebra
Speaker: Uriya First of University of Haifa
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2018-02-13 at 4:00PM
Venue: W304
Abstract:
The construction of the Brauer group of a field can be generalized to (commutative) rings, and more generally to schemes, by replacing central simple algebras with Azumaya algebras. As in the case of fields, the Brauer group is an important cohomological invariant of the scheme, featuring, for instance, in the Manin obstruction for rational points.

Many of the properties of central simple algebras generalize to Azumaya algebras, but sometimes modifications are needed. For example, Albert characterized the central simple algebras admitting an involution of the first kind as those whose Brauer class is 2-torsion. While this fails for Azumaya algebras over a ring R, Saltman showed that the 2-torsion classes in the Brauer group of R are precisely those containing some representative admitting an involution of the first kind. Knus, Parimala, and Srinivas later gave a quantitative version of this statement: if A is an Azumaya algebra over R whose Brauer class is 2-torsion, then there is an Azumaya algebra in the Brauer class of A that admits an involution and has degree 2*deg(A).

In this talk, we shall recall what Azumaya algebras are and how the Brauer group of a ring (or a scheme) is constructed. We will then present recent work with Asher Auel and Ben Williams in which we use topological obstruction theory to show that the quantitative result of Knus, Parimala, and Srinivas cannot be improved in general. Specifically, there are Azumaya algebras of degree 4 whose Brauer class is 2-torsion, but such that any algebra that is Brauer-equivalent to them and admits an involution has degree divisible by 8 = 2*4.
Title: Sparse Linear Algebra in the Exascale Era
Colloquium: Computational Mathematics
Speaker: Erin Carson of Courant Institute of Mathematical Sciences
Contact: James Nagy, jnagy@emory.edu
Date: 2018-02-13 at 4:00PM
Venue: W303
Abstract:
Sparse linear algebra problems, typically solved using iterative methods, are ubiquitous throughout scientific and data analysis applications and are often the most expensive computations in large-scale codes due to the high cost of data movement. Approaches to improving the performance of iterative methods typically involve modifying or restructuring the algorithm to reduce or hide this cost. Such modifications can, however, result in drastically different behavior in terms of convergence rate and accuracy. A clear, thorough understanding of how inexact computations, due to either finite precision error or intentional approximation, affect numerical behavior is thus imperative in balancing the tradeoffs between accuracy, convergence rate, and performance in practical settings. In this talk, we focus on two general classes of iterative methods for solving linear systems: Krylov subspace methods and iterative refinement. We present bounds on the attainable accuracy and convergence rate in finite precision s-step and pipelined Krylov subspace methods, two popular variants designed for high performance. For s-step methods, we demonstrate that our bounds on attainable accuracy can lead to adaptive approaches that both achieve efficient parallel performance and ensure that a user-specified accuracy is attained. We present two such adaptive approaches, including a residual replacement scheme and a variable s-step technique in which the parameter s is chosen dynamically throughout the iterations. Motivated by the recent trend of multiprecision capabilities in hardware, we present new forward and backward error bounds for a general iterative refinement scheme using three precisions. The analysis suggests that on architectures where half precision is implemented efficiently, it is possible to solve certain linear systems up to twice as fast and to greater accuracy. 
As we push toward exascale level computing and beyond, designing efficient, accurate algorithms for emerging architectures and applications is of utmost importance. We discuss extensions to machine learning and data analysis applications, the development of numerical autotuning tools, and the broader challenge of understanding what increasingly large problem sizes will mean for finite precision computation both in theory and practice.
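The three-precision refinement idea from the abstract can be sketched in a few lines. The sketch below is illustrative and is not the speaker's algorithm: it uses float32 as a stand-in for the low (factorization) precision, float64 as the working precision, and extended precision for residuals, on a well-conditioned random system. In practice one would factor once and reuse the LU factors rather than re-solving, and half precision would play the lowest role:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

# "factorization precision": initial solve entirely in float32
A_lo = A.astype(np.float32)
x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    # residual accumulated in extended precision
    r = b.astype(np.longdouble) - A.astype(np.longdouble) @ x
    # correction from another low-precision solve (stand-in for reusing LU factors)
    d = np.linalg.solve(A_lo, r.astype(np.float32))
    # solution updated in working (float64) precision
    x = x + d.astype(np.float64)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Each sweep contracts the error by roughly the low-precision solve accuracy times the condition number, so a handful of cheap iterations recovers working-precision accuracy.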
Title: When the mesh is important: The role of anisotropic mesh adaptation in numerical modeling, from crack propagation to topology optimization
Colloquium: Computational Mathematics
Speaker: Simona Perotto of Politecnico di Milano, Italy
Contact: James Nagy, jnagy@emory.edu
Date: 2018-02-12 at 4:00PM
Venue: W301
Abstract:
Anisotropic mesh adaptation has proved to be a powerful strategy for improving the quality and the efficiency of numerical modeling. Anisotropic phenomena occur in many applications, ranging from shocks in compressible flows and steep boundary or internal layers in viscous flows around bodies, to fronts of various kinds that must be sharply tracked. These problems typically require advanced methods of scientific computing that rely on a tessellation, or “mesh”, of the region of interest. The intrinsic directionality of these dynamics calls for accurate control of the shape, size, and orientation of mesh elements, in contrast to standard isotropic meshes, where the only parameter to choose is the element size. Metric-based techniques usually drive anisotropic mesh adaptation, the metric being derived by either heuristic or theoretical approaches. In the former case, the metric is identified by a numerical approximation of the Hessian or of the gradient of the discrete solution, coupled with an a priori error estimator. More rigorous, theoretically based approaches start from a posteriori error analyses, i.e., from an explicit control of the discretization error or, in more sophisticated cases, of a functional of the error. This control is enhanced by an appropriate inclusion of the main directional features of the problem at hand.

In this presentation, we focus on both heuristic and rigorous anisotropic error estimators. We present some theoretical aspects and applications to a variety of problems relevant to different fields: (i) propagation of cracks in brittle materials, (ii) topology optimization of structures for aerospace engineering (in collaboration with Thales Alenia Space), and (iii) medical image segmentation.
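The heuristic, Hessian-based route mentioned in the abstract can be illustrated on a toy field with a sharp internal layer. The sketch below is invented for illustration (it is not the speaker's code): it estimates the Hessian of a discrete solution by finite differences and builds a metric whose eigenvalues request small elements across the layer and large elements along it:

```python
import numpy as np

n = 101
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.tanh(20 * X)          # layer-like profile: varies sharply in x, not at all in y

hx = x[1] - x[0]
# finite-difference Hessian entries of the discrete solution
uxx = np.gradient(np.gradient(u, hx, axis=0), hx, axis=0)
uyy = np.gradient(np.gradient(u, hx, axis=1), hx, axis=1)
uxy = np.gradient(np.gradient(u, hx, axis=0), hx, axis=1)

i = n // 2 + 5               # a sample point inside the layer
H = np.array([[uxx[i, i], uxy[i, i]],
              [uxy[i, i], uyy[i, i]]])

lam, _ = np.linalg.eigh(H)
eps = 1e-3
metric = np.maximum(np.abs(lam), eps)   # clip so the metric stays positive definite
sizes = 1.0 / np.sqrt(metric)           # target element size in each eigen-direction
aspect = sizes.max() / sizes.min()      # requested element aspect ratio
```

Inside the layer the curvature across the front dominates, so the metric prescribes strongly stretched elements aligned with the front, which is exactly the behavior an isotropic size parameter cannot express.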
Title: Fast and stable algorithms for large-scale computation
Colloquium: Computational Mathematics
Speaker: Yuanzhe Xi of University of Minnesota
Contact: James Nagy, jnagy@emory.edu
Date: 2018-02-08 at 4:00PM
Venue: W301
Abstract:
Scientific computing and data analytics have become the third and fourth pillars of scientific discovery. Their success is tightly linked to a rapid increase in the size and complexity of problems and datasets of interest. In this talk, I will discuss our recent efforts in the development of novel numerical algorithms for tackling these challenges. In the first part, I will present a stochastic Lanczos algorithm for estimating the spectrum of Hermitian matrix pencils. The proposed algorithm accesses the matrices only through matrix-vector products and is suitable for large-scale computations. This algorithm is one of the key ingredients in the new breed of “spectrum slicing”-type eigensolvers for electronic structure calculations. In the second part, I will present our newly developed fast structured direct solvers for kernel systems and their applications in accelerating the learning process. By exploiting the intrinsic low-rank structure of the coefficient matrix, these structured solvers can overcome the cubic solution cost and quadratic storage cost of standard dense direct solvers, and they provide a new framework for performing various matrix operations in linear complexity.
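A minimal sketch of stochastic Lanczos quadrature conveys the flavor of the first part. This toy is illustrative only: the talk concerns Hermitian matrix pencils, while the code below estimates tr(exp(A)) for a single symmetric matrix with a known spectrum, using only matrix-vector products with A:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, probes = 200, 30, 40

# symmetric test matrix with a known spectrum, so the estimate can be checked
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
eigs = np.linspace(0.1, 2.0, n)
A = (Q * eigs) @ Q.T

def lanczos(A, v, m):
    """m steps of Lanczos with full reorthogonalization; returns tridiagonal T."""
    n = len(v)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # reorthogonalize
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

f = np.exp
est = 0.0
for _ in range(probes):
    v = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe, ||v||^2 = n
    T = lanczos(A, v, m)
    theta, S = np.linalg.eigh(T)
    # Ritz values/weights give a Gauss quadrature rule for v^T f(A) v
    est += n * (S[0, :] ** 2 @ f(theta))
est /= probes

exact = np.sum(np.exp(eigs))
```

Each probe costs only m matrix-vector products, so the total work is independent of any explicit eigendecomposition of A; this matvec-only access pattern is what makes the approach viable at large scale.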
Title: The Riemann Hypothesis
Seminar: Algebra
Speaker: Ken Ono of Emory University
Contact: David Zureick-Brown, dzb@mathcs.emory.edu
Date: 2018-02-06 at 4:00PM
Venue: W304
Abstract:
TBA
Title: Optimization for scalable graph analytics
Colloquium: Computational Mathematics
Speaker: Kimon Fountoulakis of University of California, Berkeley
Contact: James Nagy, jnagy@emory.edu
Date: 2018-02-05 at 4:00PM
Venue: W301
Abstract:
Graphs, long popular in computer science and discrete mathematics, have received renewed interest because they provide a useful way to model many types of relational data. In biology, for example, graphs are routinely used to generate hypotheses for experimental validation; in neuroscience, they are used to study the networks and circuits in the brain; and in social networks, they are used to find common behaviors of users. These modern graph applications require the analysis of large graphs, which can be computationally expensive. Graph algorithms have been developed to identify and interpret small-scale local structure in large-scale data without requiring access to all the data. These algorithms have mainly been studied in theoretical computer science, where they are viewed as approximation methods for combinatorial problems.

In our work, we take a step back and analyze scalable graph clustering methods from data-driven and variational perspectives. These perspectives offer complementary points of view to the theoretical computer science perspective. In particular, we study implicit regularization properties of certain methods, we resolve data-driven issues of existing methods, we show explicitly which optimization problems certain graph clustering procedures are solving, we prove that existing optimization methods have better performance and generalize to unweighted graphs, and finally we demonstrate how state-of-the-art methods can be efficiently parallelized for modern multi-core hardware.
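A representative local algorithm of the kind described above is the Andersen-Chung-Lang "push" procedure for approximate personalized PageRank, which touches only vertices near the seed rather than the whole graph. The sketch below, with a toy graph and parameters invented for illustration, is not from the talk:

```python
from collections import defaultdict

def appr_push(adj, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank via local residual pushes."""
    p = defaultdict(float)                  # approximate PageRank vector
    r = defaultdict(float)                  # residual (unprocessed) mass
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(adj[u])
        if r[u] < eps * deg:                # residual too small: nothing to push
            continue
        ru = r[u]
        p[u] += alpha * ru                  # settle a fraction of the mass at u
        r[u] = (1 - alpha) * ru / 2         # keep half of the rest at u (lazy walk)
        push = (1 - alpha) * ru / (2 * deg)
        if r[u] >= eps * deg:
            queue.append(u)
        for v in adj[u]:                    # spread the other half to neighbors
            old = r[v]
            r[v] += push
            if old < eps * len(adj[v]) <= r[v]:
                queue.append(v)
    return dict(p)

# Toy graph: two triangles joined by a single bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
p = appr_push(adj, seed=0)
left = p.get(0, 0) + p.get(1, 0) + p.get(2, 0)
right = p.get(3, 0) + p.get(4, 0) + p.get(5, 0)
```

Seeding in the left triangle keeps most probability mass on that side of the bridge, which is exactly the local-cluster signal such methods exploit, and the total work is bounded independently of the graph size.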
Title: New Era in Distributed Computing with Blockchains and Databases
Seminar: Computer Science
Speaker: Dr. C. Mohan, IBM Fellow and Distinguished Visiting Professor, Tsinghua University
Contact: Li Xiong, lxiong@emory.edu
Date: 2018-02-02 at 3:00PM
Venue: MSC E208
Abstract:
A new era is emerging in the world of distributed computing with the growing popularity of blockchains (shared, replicated and distributed ledgers) and the associated databases as a way of integrating inter-organizational work. Originally, the concept of a distributed ledger was invented as the underlying technology of the cryptocurrency Bitcoin. But the adoption and further adaptation of it for use in commercial or permissioned environments is what is of utmost interest to me and hence will be the focus of this keynote. Computer companies like IBM and Microsoft, and many key players in different vertical industry segments, have recognized the applicability of blockchains in environments other than cryptocurrencies. IBM did some pioneering work by architecting and implementing Fabric, and then open sourcing it. Now Fabric is being enhanced via the Hyperledger Consortium as part of The Linux Foundation. A few of the other efforts include Enterprise Ethereum, R3 Corda and BigchainDB. While there is no standard in the blockchain space currently, all the ongoing efforts involve some combination of database, transaction, encryption, consensus and other distributed systems technologies. Some of the application areas in which blockchain pilots are being carried out are: smart contracts, supply chain management, know your customer, derivatives processing and provenance management. In this talk, I will survey some of the ongoing blockchain projects with respect to their architectures in general and their approaches to some specific technical areas. I will focus on how the functionality of traditional and modern data stores is being utilized, or not, in the different blockchain projects. I will also contrast how traditional distributed database management systems have handled replication with how blockchain systems do it.
Since most of the blockchain efforts are still in a nascent state, the time is right for database and other distributed systems researchers and practitioners to get more deeply involved to focus on the numerous open problems.
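The core "shared, replicated ledger" idea can be conveyed with a minimal hash-chained log. This is purely illustrative; real permissioned systems such as Fabric layer endorsement policies, consensus protocols, and smart contracts on top of it:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON encoding."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, txns):
    """Link a new block to the current head via its hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txns": txns})

def verify(chain):
    """Recompute the hash links; any tampered block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, ["alice->bob:5"])
append_block(chain, ["bob->carol:2"])
ok_before = verify(chain)

chain[0]["txns"][0] = "alice->bob:500"     # tamper with recorded history
ok_after = verify(chain)
```

Because each block commits to the hash of its predecessor, rewriting any historical transaction invalidates every later link, which is the tamper-evidence property that distinguishes a ledger from an ordinary replicated database.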
Title: Computational mathematics meets medicine: Formulations, numerics, and parallel computing
Colloquium: Computational Mathematics
Speaker: Andreas Mang of University of Houston
Contact: James Nagy, jnagy@emory.edu
Date: 2018-02-01 at 4:00PM
Venue: W301
Abstract:
We will discuss computational methods that integrate imaging data with (bio)physics simulations and optimization in an attempt to aid decision-making in challenging clinical applications. In particular, we will focus on PDE-constrained formulations for diffeomorphic image registration, a classical inverse problem that seeks pointwise correspondences between two or more images of the same scene. In its simplest form, the PDE constraints are the transport equations for the image intensities. We will augment these equations with a model of brain cancer progression to enable data assimilation in brain tumor imaging. We will see that our formulation yields strongly coupled, nonlinear, multiphysics systems that are challenging to solve efficiently. We will discuss the formulation, discretization, numerical solution, and the deployment of our methods on high-performance computing platforms. Our code is implemented in C/C++ and uses the Message Passing Interface (MPI) library for parallelism.

We will showcase results for clinically relevant problems, and study numerical accuracy, rate of convergence, time-to-solution, inversion quality, and scalability of our solver. We will see that we can solve clinically relevant problems (50 million unknowns) in less than two minutes on a standard workstation. With 512 MPI tasks we can reduce the runtime to under 2 seconds, paving the way to tackling real-time applications. We will also showcase results for the solution of registration problems of unprecedented scale, with up to 200 billion unknowns.
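The transport constraint mentioned in the abstract can be made concrete with a one-dimensional toy: advecting a "template image" by a constant velocity with a first-order upwind scheme. This sketch is illustrative only and is unrelated to the speaker's 3D parallel C/C++ solver:

```python
import numpy as np

n, v, T = 200, 0.25, 1.0
x = np.linspace(0, 1, n, endpoint=False)    # periodic grid on [0, 1)
h = x[1] - x[0]
dt = 0.5 * h / abs(v)                       # CFL-stable time step

m0 = np.exp(-200 * (x - 0.3) ** 2)          # template image: a bump at x = 0.3

# solve m_t + v m_x = 0 with upwind differencing (v > 0), periodic boundaries
m = m0.copy()
t = 0.0
while t < T:
    m = m - v * dt / h * (m - np.roll(m, 1))
    t += dt

# after time T the bump should have been transported to x = 0.3 + v*T = 0.55
peak = x[np.argmax(m)]
```

First-order upwinding transports the bump at the right speed but smears it through numerical diffusion, which hints at why the full 3D problem demands the careful discretizations and high-performance solvers the talk describes.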