## 2015-2016 CSCE 681 Abstracts

### MANDATORY FOR NEW GRAD STUDENTS (but not other CSCE 681 students)

4:10-6:00 p.m., Monday August 31, 2015
Room 124, Bright Building

Abstract

This meeting will concentrate on the essentials the students will need to settle in. It will include an introduction to departmental administration (staff who's who, payroll, mailboxes, phones), the computing resources (computer use/accounts, printer quotas, lab access/tours), the academic advising staff and resources, the TAMU honor system, TAMU Libraries, Graduate Teaching Academy, Student Engineers' Council and relevant student organizations (CSEGSA, AWICS, TACS (TAMU ACM and IEEE student chapter), UPE and TAGD).

### MANDATORY FOR NEW GRAD STUDENTS and counts towards requirement for CSCE 681 students.

4:10-6:00 p.m., Wednesday September 2, 2015
Room 124, Bright Building

Abstract

• 4:10-5:10 p.m. - Presentation
• 5:10-6:00 p.m. - Pizza & Current Student Poster Session - new students can meet current grads and learn about ongoing research projects.

### BetrFS: Write-Optimization in a Kernel File System

Donald E. Porter
Assistant Professor
Kieburtz Young Scholar in Computer Science
Computer Science Department
Stony Brook University

4:10pm Monday, September 14, 2015
Room 124 HRBB

Abstract

Write-optimized data structures (WODS) are a promising building block for storage systems because they have the potential to strictly dominate the performance of B-trees and other common on-disk data structures. In particular, WODS can dramatically improve performance of both small, random writes and large, sequential scans.

This talk will introduce the basics of WODS, including the Bε-tree, and then will describe BetrFS, the first in-kernel file system built with a WODS. This work contributes a combination of kernel-level techniques to leverage write-optimization in the VFS layer and data structure-level enhancements to meet the requirements of a POSIX-style file system. BetrFS outperforms widely-used file systems, such as ext4 and xfs, on many benchmarks, sometimes by orders of magnitude.
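The buffering idea behind write-optimization can be illustrated with a toy sketch (a deliberate simplification, not BetrFS's actual data structure; the two-level shape and `BUFFER_LIMIT` are illustrative assumptions): inserts land in an in-memory buffer at an internal node and are pushed down to children in batches, so many small random writes share the cost of each flush, while queries consult the buffers on the root-to-leaf path.

```python
# Toy sketch of the write-optimization idea behind Be-trees (a hypothetical
# simplification, not BetrFS's on-disk structure): writes are buffered at
# internal nodes and flushed to children in batches, so the cost of each
# small random write is amortized over a whole batch.

class Node:
    def __init__(self):
        self.buffer = {}    # pending (key -> value) messages
        self.children = []  # child nodes, separated by pivot keys
        self.pivots = []    # pivots[i] separates children[i] and children[i+1]
        self.data = {}      # leaf storage

    def is_leaf(self):
        return not self.children

BUFFER_LIMIT = 4            # tiny limit so flushing is easy to observe

def route(node, key):
    for i, pivot in enumerate(node.pivots):
        if key < pivot:
            return node.children[i]
    return node.children[-1]

def insert(node, key, value):
    if node.is_leaf():
        node.data[key] = value
        return
    node.buffer[key] = value        # O(1): just record the message
    if len(node.buffer) >= BUFFER_LIMIT:
        flush(node)

def flush(node):
    # Push the whole buffer down in one batch: roughly one "I/O" per
    # child instead of one per message.
    for key, value in sorted(node.buffer.items()):
        insert(route(node, key), key, value)
    node.buffer.clear()

def lookup(node, key):
    # A query must check the buffers along the root-to-leaf path.
    while not node.is_leaf():
        if key in node.buffer:
            return node.buffer[key]
        node = route(node, key)
    return node.data.get(key)
```

Inserting four keys triggers a flush to the leaves, while a fifth stays buffered at the root; lookups return the same answers either way, which is the point: write cost drops without changing query semantics.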

The BetrFS project is an ongoing effort by an increasingly large team of contributors from Stony Brook, Rutgers, MIT, and Tokutek/Percona. More information, including source code, is available at betrfs.org.

Biography

Don Porter is an Assistant Professor and Kieburtz Young Scholar of Computer Science at Stony Brook University. Porter's research interests broadly involve improving efficiency and security of computer systems. Porter earned a Ph.D. and M.S. from The University of Texas at Austin, and a B.A. from Hendrix College. He has received awards including the NSF CAREER Award and the Bert Kay Outstanding Dissertation Award from UT Austin.

Faculty Contact: Dr. Dilma Da Silva

### Multi-Sensor Data-Driven Synchronization Using Wearable Sensors

Roozbeh Jafari
Associate Professor
Biomedical Engineering, Computer Science and Engineering, Electrical and Computer Engineering
Texas A&M University

4:10pm Wednesday, September 16, 2015
Room 124 HRBB

Abstract

Wearable computers bring to fruition many new opportunities to continuously monitor the human body, whether they are intended to detect the early onset of a disease, assess human performance, determine the effectiveness of a treatment, or enhance the user's productivity.

In this talk, we initially provide an overview of our activities related to wearable computer design and development. The overview will help students identify collaboration opportunities with the Embedded Signal Processing Laboratory. We will then proceed to report on our investigation of clock synchronization for the Internet of Things (IoT), leveraging wearable computers.

Our approach exploits common physical events observed by the sensors as they interact. This allows for synchronization of low-power embedded systems in heterogeneous sensor networks by using the events in the physical world to drive the synchronization in the cyber world. Using the events, a sensor-accuracy delay model, and a graph model, we detect physical and cyber couplings between the sensor data streams and determine which couplings will minimize the overall clock drift and time keeping discrepancies. We present a graph model to represent the event couplings between sensors and the drift in the sensor timing.

We propose a solution that employs a shortest path algorithm, selects the best set of physical and cyber couplings to update the local time of the sensors, and minimizes the overall clock drift in the system based on the graph model. Our results for two experiments show an improvement of 21.5% and 43.7% for total drift and 59.4% and 60.7% for average drift.
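As a rough illustration of the graph formulation above (the weights, function names, and API here are hypothetical sketches, not the authors' actual model), one can treat sensors as nodes, observed couplings as edges weighted by their timing uncertainty, and run a shortest-path search from a reference clock to choose which coupling each sensor should synchronize against:

```python
import heapq

# Hypothetical sketch of the coupling-graph idea: sensors are nodes, an
# edge (u, v, w) is an observed physical/cyber coupling whose weight w
# models the timing uncertainty of synchronizing across it. A shortest
# path from the reference clock picks, for each sensor, the chain of
# couplings with the least accumulated uncertainty (drift).

def best_couplings(n_sensors, couplings, reference=0):
    graph = {i: [] for i in range(n_sensors)}
    for u, v, w in couplings:
        graph[u].append((v, w))
        graph[v].append((u, w))      # couplings work in both directions

    dist = {reference: 0.0}          # accumulated uncertainty per sensor
    parent = {reference: None}       # parent[v] = sensor v syncs against
    heap = [(0.0, reference)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                 # stale heap entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    return dist, parent
```

Each sensor then updates its local clock against `parent[v]`, so the selected couplings form a tree rooted at the reference clock that minimizes accumulated drift along every path.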

Biography

Roozbeh Jafari is an associate professor in Biomedical Engineering, Computer Science and Engineering and Electrical and Computer Engineering at Texas A&M University. He received his PhD in Computer Science from UCLA and completed a postdoctoral fellowship at UC-Berkeley. His research interest lies in the area of wearable computer design and signal processing. His research has been funded by the NSF, NIH, DoD (TATRC), AFRL, AFOSR, DARPA, SRC and industry (Texas Instruments, Tektronix, Samsung & Telecom Italia). He has published over 120 papers in refereed journals and conferences. He has served as the general chair and technical program committee chair for several flagship conferences in the area of Wearable Computers.

He is the recipient of the NSF CAREER award in 2012, IEEE Real-Time & Embedded Technology & Applications Symposium (RTAS) best paper award in 2011 and Andrew P. Sage best transactions paper award from IEEE Systems, Man and Cybernetics Society in 2014. He is an associate editor for the IEEE Sensors Journal, IEEE Internet of Things Journal and IEEE Journal of Biomedical and Health Informatics.

Students who are interested in potential collaboration opportunities with ESP lab are encouraged to visit: http://jafari.tamu.edu/prospective-applicants/

Faculty Contact: Dr. Lawrence Rauchwerger

### Interactive Visual Information Spaces: Using Spatial Distribution to Amplify Cognition

Eric Ragan
Assistant Professor
Department of Visualization
College of Architecture
Texas A&M University

4:10pm Wednesday, September 23, 2015
Room 124 HRBB

Abstract

Many visual applications, such as visual analytics tools and educational games, use spatially distributed information as a means of supporting data exploration and improving understanding. However, designing interactive spatial distributions can be challenging, especially when dealing with large data sets, abstract information, and multiple display options. As a result, it is often unclear how to effectively design spatial distributions for learning and sensemaking. My research addresses this problem through controlled experimentation. In this talk, I will discuss several projects that evaluate task performance and information processing strategies, with specific examples involving scientific data exploration and analytic provenance of intelligence analysis. Overall, the results suggest that supplemental spatial information can affect mental strategies and support performance improvements for memory and understanding, but the effectiveness of spatial distribution is dependent on the nature of the task and a meaningful use of space. I will close by connecting the discussion of spatial distributions to the study of educational and training systems in 3D virtual and augmented reality environments.

Biography

Dr. Eric Ragan is an Assistant Professor in the Department of Visualization at Texas A&M University. His research interests include human-computer interaction, information visualization, virtual reality, evaluation methodology, educational software, and training systems. He previously worked as a visual analytics research scientist at Oak Ridge National Laboratory, where he studied visualization designs that enable monitoring and analysis of streaming data. Current research topics include the visualization of analytic provenance and the design and evaluation of natural interaction techniques in immersive virtual environments. Dr. Ragan received his Ph.D. in computer science from Virginia Tech. Contact him at eragan@tamu.edu.

Faculty Contact: Dr. Nancy M. Amato

### From Highly Efficient Formal Verification in Selected Areas to Success in an SOC-IP World

Flemming Andersen
Intel Corporation

4:10pm Monday, September 28, 2015
Room 124 HRBB

Abstract

Over the past twenty years, the hardware industry has gained a great deal of experience with the practical application of formal methods and verification. From originally being able to formally verify only RTL components with hundreds of gates, the tools today support millions of gates. In addition, experience indicates that a combination of functional equivalence verification techniques and assertion-based temporal logic verification of control logic is required to efficiently handle the size, functionality, and complexity of today's design components.

The most successful application areas have naturally been those like arithmetic, decoders/encoders, cache coherence protocols, RAS/ECC, and other areas where solid standards such as the IEEE 754 floating-point standard rigorously define what the formal specifications should be. Areas where no standards are given remain a challenge for formal verification, since the investment in specifications is often more or less useless when the next product is developed. Hence, the introduction of IP components and subsystems with standardized interfaces and protocols for developing SOC products enables much stronger use of formal methods and verification if used correctly. In this presentation we will look at different successful formal verification areas and discuss how these can be used as a model and method for success in an SOC-IP world.
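As a toy illustration of functional equivalence verification (an industrial flow would use BDD- or SAT-based engines rather than brute force; this sketch is purely for exposition), one can exhaustively compare an optimized arithmetic implementation against its reference specification at a small bit-width:

```python
# Toy functional equivalence check: exhaustively compare an "optimized"
# RTL-style adder against its reference specification for a small bit
# width. Real equivalence checkers use symbolic (BDD/SAT) reasoning so
# they scale to full-width designs.

WIDTH = 4
MASK = (1 << WIDTH) - 1

def spec_add(a, b):
    return (a + b) & MASK            # reference: plain modular addition

def impl_add(a, b):
    # Implementation under check: generate/propagate carry computation.
    g, p = a & b, a ^ b              # generate and propagate vectors
    carry = 0
    for i in range(WIDTH):
        gi = (g >> i) & 1
        pi = (p >> i) & 1
        ci = (carry >> i) & 1
        carry |= (gi | (pi & ci)) << (i + 1)
    return (p ^ carry) & MASK

def equivalent():
    # Exhaustive check over the entire (small) input space.
    return all(spec_add(a, b) == impl_add(a, b)
               for a in range(1 << WIDTH)
               for b in range(1 << WIDTH))
```

Exhaustive comparison is only feasible here because the input space is 2^8 pairs; the talk's point is precisely that standards like IEEE 754 supply the `spec_add` side of such a check for real designs.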

Biography

Flemming Andersen is a Principal Engineer and formal verification manager at Intel. He joined the company in 2000, where he hired, developed, and managed the formal verification team at the Austin, Texas site. In 2005 he was offered the opportunity to work at the Intel Research Labs in the Trusted Platform Lab (TPL) in Oregon to hire, manage, and develop a team investigating formal methods for security. When TPL was closed at the end of 2006 as part of a company-wide cost cutting, Flemming started to work in the Server Development Group that implements the Many Integrated Core (SDG/MIC) server processors, known as the Xeon Phi processors, which are used in Intel processor based supercomputers. One of these is the Milky Way supercomputer, currently the fastest supercomputer in the world.

In SDG/MIC Flemming owns and manages the formal verification (FV) of the RTL in the Xeon Phi processors. The main focus is on arithmetic verification, since Intel never wants to encounter another FDIV bug like the one that cost the company almost $500 million in 1994. But the FV team also verifies the correct functionality of RAS/ECC and other critical areas that benefit from formal methods, such as modeling cache coherence protocols. For the last several years at Intel he has been working on developing formal methods to bridge the gap between formal verification and simulation-based validation techniques. Before coming to the USA, Flemming managed a team of 19 researchers at TeleDanmark R&D, where he was responsible for developing new Internet services. His team implemented the Yellow Pages in Denmark and, already in 1996, a VOIP solution resembling an early version of Skype. He originally started as a research scientist at TeleDanmark Research (TDR), where he participated in the implementation of compilers and developed formal verification tools for concurrency. During his employment at TDR, he had a two-year leave of absence working as a guest scientist at the IBM Science Center in Heidelberg, Germany, where he did research in databases and helped IBM develop a new hierarchical database query language that contributed to the definition of SQL2. Flemming has a PhD in computer science as well as an M.Sc.EE degree from the Technical University of Denmark. His PhD work on formal verification of concurrent systems using the UNITY theory led to an invitation to work with Professor Mani Chandy at Caltech and later to collaboration with Professor Jayadev Misra at the University of Texas at Austin. Flemming has 6 granted US patents and 31 publications, of which more than 15 have been presented at conferences. Before coming to the USA, Flemming served on review committees and conference panels and was an EU reviewer; he has also served on program committees and is currently a member of IEEE and ACM.

Faculty Contact: Dr. Aakash Tyagi

### Detecting and Mitigating Synchronization Errors in Concurrent Data Structures

Chao Wang
Assistant Professor
Department of Electrical and Computer Engineering
Virginia Tech

4:10pm Wednesday, September 30, 2015
Room 124 HRBB

Abstract

Concurrent data structures are the foundation of many high-performance software systems. By providing a cost-effective solution to reducing memory contention and increasing scalability, they have found a wide range of applications from embedded systems to distributed systems. However, implementing concurrent data structures is not an easy task, due to the often subtle interactions of concurrent operations and the very large number of possible interleavings. In practice, a few hundred lines of highly concurrent C/C++ code can pose severe challenges for testing and debugging.

In this talk, I will present a new runtime verification method for detecting violations of standard and quasi linearizability properties in concurrent data structures. Linearizability is the de facto correctness standard for implementations of concurrent data structures. Quasi linearizability relaxes the standard notion of linearizability to allow more freedom for improving runtime performance. Our method detects (quasi) linearizability violations by systematically exploring the possible interleavings of a small client program. It can guarantee that all reported violations are real violations. In addition, I will present a method for runtime mitigation of concurrency errors exposed by the client program, which employs a combination of static and dynamic analysis to control the execution order of concurrent operations to suppress illegal operation sequences. Both methods have been implemented in a software tool built upon the Clang/LLVM platform and evaluated on multithreaded C/C++ applications.

Biography

Chao Wang is an Assistant Professor of ECE and CS (by courtesy) at Virginia Tech. He received a B.S. degree (1996) and a M.S.
degree (1999) from Peking University, China, and a Ph.D. degree (2004) from the University of Colorado at Boulder. From 2004 to 2011, he was a Research Staff Member at NEC Laboratories of America, Inc. in Princeton, New Jersey. His research interests are concurrency, formal verification, program analysis, and program synthesis. He has published a book and more than 70 conference and journal papers, many of which appeared in flagship venues of his field. He received an NSF CAREER award in 2012 and an ONR Young Investigator award in 2013. He was named Outstanding New Assistant Professor by the Virginia Tech College of Engineering in 2013. He also received the FMCAD Best Paper award in 2013, the ACM SIGSOFT Distinguished Paper award (FSE) in 2010, the Best Paper of the Year award from ACM Transactions on Design Automation of Electronic Systems in 2008, the NEC Labs Technology Commercialization Award in 2006, and the ACM SIGDA Outstanding Ph.D. Dissertation award in 2004.

Faculty Contact: Dr. Jeff Huang

### Shipping Large Scale Projects — an Insider's View

Terry Leeper
CTO and Technical Director for Amazon Business
Amazon.com, Inc.

4:10pm Wednesday, October 7, 2015
Room 124 HRBB

Abstract

With the recent surge in popularity of software development methodologies such as Agile, SCRUM, and Kanban, software development has taken steps forward in balancing user requirements and demands with developer productivity, reliability, and clarity. However, these techniques run into scaling, tracking, and consistency issues when faced with a larger project with many teams, technologies, feature areas, and user groups. This is further complicated for software that must be packaged and reliable, such as an operating system, a database provider, or developer tools. This talk will take a peek behind the development scenes of two of the most successful software companies in history, Amazon and Microsoft.
We will discuss team strategies, product definition, feature definition, technology choices, and product release. We will also compare and contrast the different philosophies and prioritization of different products.

Biography

Terry Leeper is the CTO and Technical Director for Amazon Business, the software that allows businesses and business customers to purchase on Amazon.com. Amazon Business enjoyed a very successful initial launch in April 2015, with several feature releases since then. Prior to Amazon Business, Terry owned all the vendor-facing software tools, the retail workflow engine, and the software behind MYHABIT.COM and BUYVIP.COM. Terry has been with Amazon over 4 years and is the site leader for Amazon's Austin Development Office. Prior to Amazon, Terry held several roles at Microsoft, where he mainly worked on Visual Studio, owning the C++ tool chain, including compilers, linkers, JITs, the STL and C runtime libraries, and the C++ debugger. Terry also led developer tools sales in Europe and founded the Shanghai Development Center for Visual Studio. Terry now lives in Austin, TX and is part of the Texas A&M Computer Science and Engineering Advisory Council. Terry holds degrees in Computer Science and Electrical Engineering from Texas A&M University.

Faculty Contact: Dr. Lawrence Rauchwerger

### Autotuning Compiler and Library Technology for Sparse Matrix Computations

Mary W. Hall
Professor
School of Computing
University of Utah

4:10pm Monday, October 12, 2015
Room 124 HRBB

Abstract

Computations on sparse matrices and graphs have moved to the forefront of scientific computing and data analytics. Implementations of sparse computations attempt to reduce data storage and computation requirements (e.g., by storing only nonzero data elements) through the use of indirection matrices.
The presence of indirection matrices, such as B in A[B[i]], challenges most parallelizing compiler technology, as any optimization that must understand the memory access pattern for A requires the values of B, which are not known until program execution time. Further, optimization of the sparse representations must have run-time knowledge of the nonzero structure. This talk will describe how autotuning can be used to solve both problems. Autotuning empirically evaluates a search space of possible implementations of a computation to identify the implementation that best meets its optimization criteria (e.g., performance, power, or both). We describe extensions to the CHiLL autotuning compiler framework to support optimizations of sparse computations. We also describe the Nitro code variant tuning framework, a C++ library for expressing computations for which optimization strategies depend on the input data.

Biography

Mary Hall is a professor in the School of Computing at the University of Utah. She received a PhD in Computer Science in 1991 from Rice University. Her research focuses on compiler technology for exploiting performance-enhancing features of a variety of computer architectures, with a recent emphasis on compiler-based performance tuning technology targeting many-core graphics processors and multi-core nodes in supercomputers. Hall's prior work has developed compiler techniques for exploiting parallelism and locality on a diversity of architectures: automatic parallelization for SMPs, superword-level parallelism, processing-in-memory architectures, and FPGAs. Professor Hall is an ACM Distinguished Scientist and ACM's representative on the Computing Research Association Board of Directors. She is deeply interested in computing history, having served on the ACM History Committee since 2005 and as chair from 2009-2014. She also actively participates in outreach programs to encourage the participation of women and underrepresented minorities in computer science.
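The A[B[i]] indirection pattern from the abstract above is exactly what appears in a compressed sparse row (CSR) matrix-vector multiply. The following minimal sketch (illustrative only, not taken from the CHiLL or Nitro frameworks) shows why the access pattern is opaque to static analysis:

```python
# Minimal CSR sparse matrix-vector multiply. The column-index array
# `col` plays the role of B in A[B[i]]: its values are only known at
# run time, so a compiler cannot statically analyze the access pattern
# x[col[j]] -- the problem autotuning addresses empirically.

def csr_spmv(row_ptr, col, val, x):
    # row_ptr[i]..row_ptr[i+1] delimits the nonzeros of row i;
    # val[j] is the nonzero value stored at column col[j].
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += val[j] * x[col[j]]   # indirect access through col
    return y
```

For the 2x3 matrix [[1, 0, 2], [0, 3, 0]], the CSR arrays are `row_ptr = [0, 2, 3]`, `col = [0, 2, 1]`, `val = [1.0, 2.0, 3.0]`; only the three nonzeros are stored, at the cost of the run-time indirection.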
Faculty Contact: Dr. Lawrence Rauchwerger

### Town Hall for Undergraduate Students

4:10pm Monday, October 19, 2015
Room 124 HRBB

Abstract

Town Hall meeting for undergraduate students to meet with Department Head Dr. Dilma Da Silva and to provide their input to her and to the officers of the student organizations (AWICS, TACS, TAGD, TAMUHack, UPE). Please come and bring your opinions on things that are working in the department and things that you would like to see get more attention.

Faculty Contact: Dr. Dilma Da Silva

### Building Effective and Practical Information Extraction Models

Ruihong Huang
Assistant Professor
Department of Computer Science and Engineering
Texas A&M University

4:10pm Wednesday, October 21, 2015
Room 124 HRBB

Abstract

Information extraction aims to extract structured information from unstructured free text. Extracted information is widely used in many natural language processing applications, such as question answering, text summarization, and machine translation. My research has focused on building effective and practical information extraction models by investigating linguistic phenomena and exploring weakly supervised machine learning techniques. In this talk, I will present several information extraction models that have advanced the state of the art and are easily applicable in practice. In addition, I will briefly present ongoing research in my group.

Biography

Ruihong Huang is an assistant professor in computer science and engineering at Texas A&M University. She received her Ph.D. from the School of Computing at the University of Utah and completed a postdoc in the Natural Language Processing (NLP) Group at Stanford University. Her main research interests include natural language processing, text understanding, and machine learning, with a focus on information extraction, semantics, discourse, and weakly supervised learning.

Faculty Contact: Dr. Nancy M. Amato

### Town Hall Meeting for Ph.D. Students

4:10pm Wednesday, October 28, 2015
Room 124 HRBB

Abstract

Town Hall meeting for Ph.D. students to meet with Department Head Dr. Dilma Da Silva and Dr. Hank Walker, Graduate Advisor. Come talk in an open and casual forum on how CSE can serve you better; what's working or what needs attention; how to get more involved.

Faculty Contact: Dr. Dilma Da Silva

### Town Hall Meeting for M.S. Students

4:10pm Monday, October 30, 2015
Room 124 HRBB

Abstract

Town Hall meeting for M.S. students to meet with Department Head Dr. Dilma Da Silva and Dr. Hank Walker, Graduate Advisor. Come talk in an open and casual forum on how CSE can serve you better; what's working or what needs attention; how to get more involved.

Faculty Contact: Dr. Dilma Da Silva

### Mining Big Data: An Application on Social Spammer Detection

Xia (Ben) Hu
Assistant Professor
Department of Computer Science and Engineering
Texas A&M University

4:10pm Wednesday, November 4, 2015
Room 124 HRBB

Abstract

It's the age of big data! We face great challenges to harness big data, make sense of the data, and turn data into knowledge. This talk will start by introducing what big data is, what data mining is, why we have to mine data now, and how. With the growing popularity of social media, social spamming has become rampant on all platforms. Many (fake) accounts, known as social spammers, are employed to overwhelm legitimate users with unwanted information. Social spammers are unique due to their coordinated efforts to launch attacks such as distributing ads to generate sales, disseminating pornography and viruses, executing phishing attacks, or simply sabotaging a system's reputation. In this talk, I will also introduce a novel and systematic analysis of social spammers from a data mining perspective to tackle the challenges raised by social media data for spammer detection.
Specifically, I will formally define the problem of social spammer detection and discuss the unique properties of social media data that make this problem challenging. By analyzing the two most important types of information, network and content information, I will introduce a unified framework that collectively uses heterogeneous information in social media. To tackle the labeling bottleneck in social media, I will show how we can take advantage of existing information about spam in email, SMS, and on the web for spammer detection in microblogging. I will also present a solution for efficient online processing to handle fast-evolving social spammers.

Biography

Xia "Ben" Hu is currently an assistant professor in the Department of Computer Science and Engineering, Texas A&M University. He obtained his Ph.D. in Computer Science and Engineering from Arizona State University, and M.S. and B.S. degrees in Computer Science from Beihang University, China. His research interests are in data mining and machine learning, with applications in social informatics and health informatics. As a result of his research work, he regularly publishes papers in several major academic venues, including WWW, SIGIR, KDD, WSDM, IJCAI, AAAI, CIKM, and SDM. One of his papers was selected for the Best Paper Shortlist at WSDM'13. He is the recipient of the 2014 ASU President's Award for Innovation, the Faculty Emeriti Fellowship, and the IEEE Atluri Award. He has served on program committees for several major conferences such as KDD, WWW, IJCAI, WSDM, and SDM, and has reviewed for multiple journals, including IEEE TKDE, TKDD, and ACM TOIS. His research attracts a wide range of external government and industry sponsors, including NSF, ONR, AFOSR, Yahoo!, and Microsoft. Updated information can be found at http://faculty.cs.tamu.edu/xiahu/

Faculty Contact: Dr. Nancy M. Amato

### Natural Language Semantics by Combining Logical and Distributional Methods using Probabilistic Logic

Raymond J. Mooney
Professor of Computer Science
Director of the UT Artificial Intelligence Laboratory
The University of Texas at Austin

4:10pm Monday, November 9, 2015
Room 124 HRBB

Abstract

Traditional logical approaches to semantics and newer distributional or vector-space approaches have complementary strengths and weaknesses. We have developed methods that integrate logical and distributional models by using a CCG-based parser to produce a detailed logical form for each sentence, and combining the result with soft inference rules derived from distributional semantics that connect the meanings of their component words and phrases. For recognizing textual entailment (RTE) we use Markov Logic Networks (MLNs) to combine these representations, and for Semantic Textual Similarity (STS) we use Probabilistic Soft Logic (PSL). We present experimental results on standard benchmark datasets for these problems and emphasize the advantages of combining the logical structure of sentences with statistical knowledge mined from large corpora.

Biography

Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana/Champaign. He is an author of over 150 published research papers, primarily in the areas of machine learning and natural language processing. He was the President of the International Machine Learning Society from 2008-2011, program co-chair for AAAI 2006, general chair for HLT-EMNLP 2005, and co-chair for ICML 1990. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the Association for Computational Linguistics, and the recipient of best paper awards from AAAI-96, KDD-04, ICML-05, and ACL-07.

Faculty Contact: Dr. Ruihong Huang

### Computational Geometry Aspects of Monte Carlo Approaches to PDE Problems in Biology, Chemistry, and Materials

Michael Mascagni
Professor of Computer Science
Florida State University

4:10pm Wednesday, November 11, 2015
Room 124 HRBB

Abstract

We will introduce some Monte Carlo methods for solving problems in electrostatics. These rely on evaluating functionals of the first-passage time of Brownian motion on geometries defined by the system of interest. We will use the Walk on Spheres (WOS) algorithm to quickly evaluate these functionals, and we will show how a computational geometry computation dominates this computation in complexity. We then consider computing the capacitance of a complicated shape, and use this as our model problem to find an efficient serial and parallel implementation. The capacitance computation is prototypical of many Monte Carlo approaches to problems in biology, biochemistry, and materials science. This is joint work with Drs. Walid Keyrouz and Derek Juba from the Information Technology Laboratory at NIST.

Biography

Michael Mascagni is an internationally recognized expert on all aspects of random number generation and Monte Carlo methods, and has lectured extensively across the globe on these and related topics. He received his undergraduate degrees in Biomedical Engineering and Mathematics at the University of Iowa in 1981, and entered Rockefeller University to study neurobiology. While taking some math courses at NYU he decided to switch to math, and he moved to the Courant Institute in 1983. He graduated in 1987, having worked with Prof. Charlie Peskin on the numerical solution of nerve equations. He has published over 100 scholarly articles, has graduated doctoral students in computer science, mathematics, and scientific computing, and currently leads a research group working on high-performance computing aspects of Monte Carlo methods and random number generation.
He is an editor for many journals, including "Monte Carlo Methods and Applications," "Mathematics and Computers in Simulation," and the "ACM Transactions on Mathematical Software." He has been a visiting faculty member at Université de Toulon et du Var, Universität Salzburg, Universität Kaiserslautern, Università degli Studi di Padova, and the King Abdullah University of Science and Technology. He also spent a sabbatical year visiting the Seminar für Angewandte Mathematik, Department Mathematik, Eidgenössische Technische Hochschule (ETH Zürich). He was elected an ACM Distinguished Scientist in 2011, and is currently a Faculty Appointee at the National Institute of Standards and Technology (NIST).

Faculty Contact: Dr. Tim Davis

### [My] 53 Years of Computing History

Walter C. Daugherity
Senior Lecturer
Department of Computer Science and Engineering
Texas A&M University

4:10pm Wednesday, November 18, 2015
Room 124 HRBB

Abstract

From paper tape and punched cards to mag tape to 64-gigabyte flash drives, from FORTRAN II to Java to C++17, from GOTO and 3-way IF statements to structured programming to object-oriented programming to quantum computing, from room-sized mainframes to single-chip computers, Dr. Walter C. Daugherity has lived and worked through 53 years (and counting) of computing history. This personal tour includes his contributions to the PC keyboard standard, integrated user training, and fuzzy logic, and the independent development of a null modem and a multitasking real-time system.

Biography

After earning a bachelor's degree in mathematics from Oklahoma Christian University at the age of 19, Walter C. Daugherity received a National Science Foundation Prize Fellowship to attend Harvard University, where he completed his master's and doctoral degrees. He then co-founded a computer consulting company whose clients included IBM Federal Systems Division, the New York Times, and the U.S. Department of the Treasury.
Among other honors and awards, he received the Bowdoin Prize from Harvard and the Outstanding Regional Intercollegiate Programming Contest Director Award from the Association for Computing Machinery. This fall he was named Oklahoma Christian's Mathematics and Computer Science Alumnus of the Year. During his 28 years at Texas A&M he qualified for American Mensa and received the Outstanding Graduate Faculty Award from the Graduate Student Council and the Undergraduate Faculty Award from the Department of Computer Science and Engineering. In 2013-2014 the faculty elected him Speaker of the Faculty Senate.

Faculty Contact: Dr. Lawrence Rauchwerger

### Bringing Cloud Services Closer to Mobile and IoT Devices

Dilma Da Silva
Department Head
Professor and Holder of the Ford Motor Company Design Professorship II
Department of Computer Science and Engineering
Texas A&M University

4:10pm Monday, November 23, 2015
Room 124 HRBB

Abstract

Cloud computing has been around for many years. There is some level of consensus on how to employ cloud computing services to benefit a variety of workloads, from enterprise and Web 2.0 applications to big data analytics and even HPC. For two other relevant domains, namely mobile apps and the Internet of Things (IoT), cloud-centric approaches are assumed to be the right way to go. In this talk, I discuss the opportunities and challenges in leveraging cloud computing technology to advance these two domains. I also present a proposal for a new cloud service that may enable more efficiency and flexibility in how mobile and IoT apps interact with the cloud.

Biography

Dilma Da Silva joined the Department of Computer Science and Engineering at Texas A&M University as its new department head in August 2014. Her primary research interests are cloud computing, operating systems, distributed computing, and high-end computing. Prior to joining Texas A&M, she worked at Qualcomm Research in California (2012-2014), the IBM Thomas J. Watson Research Center in New York (2000-2012), and the University of Sao Paulo in Brazil (1996-2000). Dilma is an ACM Distinguished Scientist, a member of the board of CRA-W (the Computing Research Association's Committee on the Status of Women in Computing Research), a member of CDC (the Coalition for Diversifying Computing), a co-founder of the Latinas in Computing group, and an event liaison with USENIX. She served as an officer of ACM SIGOPS from 2011 to 2015. She currently chairs the ACM Senior Award Committee. She has published more than 80 technical papers and filed 15 patents. Dilma received her doctoral degree in computer science from Georgia Tech in 1997 and her bachelor's and master's degrees from the University of São Paulo.

### Detecting and Fixing Performance Bugs using Executions and Code Patterns

Adrian Nistor
Assistant Professor
Schmid College of Science and Technology
Chapman University

4:10pm Wednesday, December 2, 2015
Room 124 HRBB

Abstract

Software bugs and ineffective testing cost the US economy tens of billions of dollars each year. Performance bugs are programming mistakes that slow down program execution. Performance bugs affect the user-perceived software quality, degrade application responsiveness, and lower system throughput. In addition to impacting everyday software usage, performance bugs have also created high-profile incidents, e.g., brought down the Wikipedia and Facebook servers. In this talk I will present my recent work on understanding, detecting, and fixing performance bugs. I will first discuss Caramel, a static analysis technique that detects and fixes performance bugs that have non-intrusive fixes. I will then discuss Toddler, a dynamic analysis technique that detects a different class of performance bugs than Caramel. The idea in Caramel and Toddler is to identify code and execution patterns that are indicative of common programming mistakes affecting performance.
I will also briefly present several of my other projects on performance, concurrency, and mobile bugs. Caramel and Toddler found over 190 new performance bugs in widely used Java applications (Ant, Lucene, Google Core Libraries, Groovy, Tomcat, etc.) and C/C++ applications (GCC, Google Chrome, Mozilla, MySQL); 140 of these bugs have already been fixed by developers based on our reports.

Biography

Adrian Nistor received his Ph.D. in 2014 from the Computer Science Department at the University of Illinois at Urbana-Champaign. He is currently an Assistant Professor at Chapman University. Adrian's research interests are in software engineering, with a focus on detecting, repairing, and preventing bugs in real-world applications. His projects investigate performance, concurrency, and mobile bugs. Adrian's Caramel paper won an ACM SIGSOFT Distinguished Paper Award at ICSE 2015. One of the largest telecommunications companies in the world is exploring a technology transfer for Caramel. Adrian received an NSF SHF:Small grant to investigate performance bugs that have non-intrusive fixes. Adrian is a committer to Apache Collections.

Faculty Contact: Dr. Jeff Huang

### Spring 2016 Abstracts

### Programming by Tutoring: Advancing the Science of Learning with Teachable Agents

Noboru Matsuda
Associate Professor
Cyber STEM Education
Texas A&M University

4:10pm Wednesday, January 20, 2016
Room 124 HRBB

Abstract

VanLehn, Ohlsson, and Nason (1994) conjectured that simulated students hold great potential for teachers, students, and instructional developers. Twenty years after their seminal work, we have built SimStudent, a computational model of learning that has been shown to be beneficial to students and researchers. SimStudent inductively learns procedural rules to solve problems from examples and through tutored problem solving. In this talk, I first introduce SimStudent, a cutting-edge machine-learning agent designed to advance the theory of educational technology and learning.
I demonstrate SimStudent's ability to contribute to the sciences of learning in three research areas: (1) intelligent authoring, where SimStudent allows authors to create a cognitive tutor by tutoring SimStudent; (2) student modeling, where SimStudent allows researchers to run simulations to understand how students learn (and fail to learn); and (3) learning by teaching, where SimStudent functions as a teachable agent that helps students learn by teaching. I will then discuss future directions for SimStudent (broader applications and research opportunities) and simulated learners in general.

Biography

Dr. Noboru Matsuda is an Associate Professor of Cyber STEM Education on the Department of Teaching, Learning, and Culture faculty at Texas A&M University. Noboru's research interests include applications of cutting-edge technologies to enhance learning and to advance cognitive theories in the sciences of learning, with a particular focus on STEM education. Noboru received an MS in Math Education from Tokyo Gakugei University (Tokyo, Japan) and a Ph.D. in Intelligent Systems from the University of Pittsburgh. Noboru has been leading the SimStudent project (www.SimStudent.org), whose research team develops an artificial intelligence that can learn problem-solving skills through guided problem solving (aka peer tutoring). Noboru started the SimStudent project when he joined Carnegie Mellon University in 2004 for his postdoctoral training. Since then, he has expanded the project into multiple applications, including intelligent authoring, learning simulation, and teachable agents. In recent years, Noboru has launched a new research project on learning engineering, in which he and his colleagues study how to efficiently build adaptive online courses and how to conduct research on the sciences of learning using those courses.

Faculty Contacts: Dr. Yoonsuck Choe

### Automatic System Anomaly Prediction and Diagnosis using Unsupervised Machine Learning

Xiaohui (Helen) Gu
Associate Professor
Department of Computer Science
North Carolina State University

4:10pm Wednesday, January 27, 2016
Room 124 HRBB

Abstract

Distributed computing infrastructures have become the fundamental platforms for many production systems. However, due to their inherent complexity and shared nature, those computing infrastructures are prone to various system anomalies such as performance degradation, software hangs, and unexpected system halts. In this talk, I will present a set of automatic system anomaly prediction and diagnosis techniques based on unsupervised machine learning. Our techniques can raise advance alerts before an anomaly affects the system and provide important clues about why an anomaly occurs. By analyzing system call traces using frequent episode mining, our technique can localize the application functions related to the root cause for both compiled and interpreted programs (e.g., C++, Java) without requiring any application instrumentation. We have tested our techniques on a set of popular open source server applications such as Cassandra, Hadoop, Apache, and MySQL. Our results show that we can achieve high prediction and diagnosis accuracy with less than 3% runtime overhead.

Biography

Xiaohui (Helen) Gu is an associate professor in the Department of Computer Science at North Carolina State University. She received her PhD degree in 2004 and MS degree in 2001 from the Department of Computer Science, University of Illinois at Urbana-Champaign, and her BS degree in computer science from Peking University, Beijing, China, in 1999. She was a research staff member at the IBM T. J. Watson Research Center, Hawthorne, New York, between 2004 and 2007. She received the ILLIAC Fellowship, the David J. Kuck Best Master's Thesis Award, and the Saburo Muroga Fellowship from the University of Illinois at Urbana-Champaign.
She also received IBM Invention Achievement Awards in 2004, 2006, and 2007. Dr. Gu has filed nine patents and has published more than 60 research papers in international journals and major peer-reviewed conference proceedings. She is a recipient of an NSF CAREER Award, four IBM Faculty Awards (2008, 2009, 2010, 2011), two Google Research Awards (2009, 2011), best paper awards at ICDCS 2012 and CNSM 2010, and an NCSU Faculty Research and Professional Development Award. She served as program co-chair for IEEE/ACM IWQoS 2013 and USENIX ICAC 2014. She is an associate editor for IEEE Transactions on Parallel and Distributed Systems (TPDS).

Faculty Contacts: Dr. Dilma Da Silva

### Social Photography, Community, and Human-in-the-loop AI Systems

David Ayman Shamma
Director of HCI Research
Yahoo! Labs and Flickr

4:10pm Wednesday, February 1, 2016
Room 124 HRBB

Abstract

Today, beyond content and metadata, information is organized by the online social actions taken upon it. These social activities contribute to the overall conversational nature of the media we create, store, and share. From this, there exist many opportunities to build a new class of social-visual systems to aid in the organization and retrieval processes; these opportunities rely heavily on both the tacit and explicit communicative nature of social multimedia. In this talk, I will discuss the new practice of photography and how the media we create have become conversational media objects. Further, I will present a multifaceted human-centered computing system used to surface geo-located weather photos for editorial inclusion in a mobile application. Using the Flickr photo-sharing service, we can identify explicit group behavior, observe implicit photo-viewing patterns, and apply modern computer vision techniques to surface photos for curatorial editors as a human-in-the-loop AI system.
Finally, I will outline new findings and challenges in social media organization, including geographic annotation of photographs and regions, community congregation online, and social engagement.

Biography

David Ayman Shamma is Director of HCI Research at Yahoo Labs and Flickr. He received his Ph.D. in Computer Science from Northwestern University in 2005, from the Intelligent Information Laboratory. His personal research investigates social multimedia computing and creativity. He currently serves on the steering committees for ACM Multimedia and ACM TVX. In 2013, he was co-chair of the Technical Program at ACM Multimedia, and he is a co-general chair for ACM Creativity and Cognition 2017. He is Arts & Digital Culture Co-Editor of SIGMM and Co-Editor of the IEEE Multimedia Special Issue on Social Multimedia and Storytelling. In the past, he was a Visiting Senior Research Fellow at the Keio-NUS CUTE Center (2014) and was appointed a Senior Member of the ACM (2012). Before Yahoo!, he was an instructor at the Medill School of Journalism; he has also taught courses in computer science and studio art. Prior to receiving his Ph.D., he was a visiting research scientist for the Center for Mars Exploration at NASA Ames Research Center.

Faculty Contacts: Dr. Andruid Kerne

### Efficient Diameter Approximation for Large Graphs in MapReduce

Geppino Pucci
Professor of Computer Science
Department of Electrical Engineering and Computer Science
University of Padova

4:10pm Monday, February 8, 2016
Room 124 HRBB

Abstract

We present a space- and time-efficient practical parallel algorithm for approximating the diameter of massive weighted undirected graphs on distributed platforms supporting a MapReduce-like abstraction. The core of the algorithm is a weighted graph decomposition strategy generating disjoint clusters of bounded weighted radius.
Theoretically, our algorithm uses linear space and yields a polylogarithmic approximation guarantee; moreover, for important practical classes of graphs, it runs in a number of rounds asymptotically smaller than those required by the natural approximation provided by the state-of-the-art Δ-stepping SSSP algorithm, which is its only practical linear-space competitor in the aforementioned computational scenario. We complement our theoretical findings with an extensive experimental analysis on large benchmark graphs, which demonstrates that our algorithm attains substantial improvements on a number of key performance indicators with respect to the aforementioned competitor, while featuring a similar approximation ratio (a small constant less than 1.4, as opposed to the polylogarithmic theoretical bound).

JOINT WORK WITH: Matteo Ceccarello, Andrea Pietracaprina, and Eli Upfal.

Biography

Geppino Pucci received the Laurea summa cum laude (1987) and the Ph.D. degree (1993) both in Computer Science from the University of Pisa, Italy. From 1988 to 1990 he was a Research Associate at the Computing Laboratory of the University of Newcastle-upon-Tyne, United Kingdom, where he conducted research in software reliability modelling. From 1990 to 1991 he was a visiting graduate student at the International Computer Science Institute, Berkeley, California, upon invitation of the institute and under the supervision of Professor Richard Karp. In 1992, he joined the Department of Electrical Engineering and Computer Science (DEI) of the University of Padova, Italy, as an Assistant Professor. On leave from DEI, he returned to Berkeley in 1993 as a postdoctoral fellow. In 1996, he spent the Summer Semester at Cornell University, Ithaca, NY as a visiting professor and course instructor. Since October 2001, he has been a Full Professor of Computer Science at DEI.

Pucci has made numerous teaching and research visits to international research institutions in Asia, Europe, and the United States. His research interests lie broadly in the area of algorithm design, with emphasis on high-performance systems and data mining. He has authored or coauthored about ninety papers in the field, which have appeared in international journals or refereed conference proceedings. Prof. Pucci has participated (either as a leader or as a key researcher) in several research projects funded by the EU, NATO, MIUR, CNR, NSF-USA, and NCSR-UK. He has served on the editorial boards of a number of prestigious journals (Parallel Computing, Theory of Computing Systems, Journal of Discrete Algorithms) and on the program committees of the most prestigious conferences in his area, including ACM SPAA, IEEE IPDPS (Algorithms Track Vice-Chair 2015), ICALP, and STACS.

Faculty Contacts: Dr. Nancy M. Amato

### Network-Oblivious Algorithms

Andrea A. Pietracaprina
Professor of Computer Science
Department of Electrical Engineering and Computer Science
University of Padova

4:10pm Wednesday, February 10, 2016
Room 124 HRBB

Abstract

A framework is proposed for the design and analysis of Network-Oblivious Algorithms, namely, algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem's input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency.

It is shown that, for a wide class of network-oblivious algorithms, optimality in the latter model implies optimality in the Decomposable BSP model, which is known to effectively describe a wide and significant class of parallel platforms. The proposed framework can be regarded as an attempt to port the notion of obliviousness, well established in the context of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

JOINT WORK WITH: Gianfranco Bilardi, Geppino Pucci, Michele Scquizzato, Francesco Silvestri.

Biography

Andrea A. Pietracaprina received the Laurea degree (summa cum laude) in Computer Science (1987) from the University of Pisa, Italy, co-winning best undergraduate thesis awards from IBM-Italia and UNITEAM. He received the M.Sc. (1991) and Ph.D. (1994) degrees, both in Computer Science, from the University of Illinois at Urbana-Champaign, IL, USA. Since 1994, he has been with the University of Padova, and since 2002 he has been a Full Professor at the Department of Information Engineering, where he served as vice chair from 2008 to 2013.

Andrea Pietracaprina's research interests concern models of computation, algorithms and data structures for parallel and/or hierarchical platforms, data mining, and ad-hoc networks. His research results have been published in over 80 papers in international journals, conference proceedings, and collections (including an encyclopedia). He has been Principal Investigator of the MIUR-PRIN project AlgoDEEP (2010-2012) and of projects funded by the University of Padova, CNR, and NATO, and a key researcher in several projects funded by MIUR, CNR, and the EU. He has also been a reviewer for projects funded by MIUR and the EU (5th Framework Programme).

Since 2010 he has been a member of the editorial board of the Journal of Discrete Algorithms (Elsevier). From 2004 to 2009 he was an Associate Editor for IEEE TPDS, and in 2008 he was Guest co-Editor of the Special Issue of Theoretical Computer Science 408(2-3). He has been a member of the program committees of several international conferences, including ACM SPAA (2002, 2005, 2008), ICALP (2014), IEEE IPDPS (2008, 2010, 2012), Euro-Par (1998, 2004, 2005, 2009, 2014), ACM Computing Frontiers (2010, 2013, 2015), and ECML PKDD (2015). He has also been involved in the organization of a number of international conferences (ACM SPAA'96, ESA'98, ICALP'06) and schools. Since 1999 he has been a member of the Advisory Board of the Euro-Par Conference.

Faculty Contacts: Dr. Nancy M. Amato

### Sequoia to Sierra: The LLNL Strategy

Bronis R. de Supinski
Chief Technology Officer
Livermore Computing
Lawrence Livermore National Laboratory

4:10pm Wednesday, February 17, 2016
Room 124 HRBB

Abstract

Lawrence Livermore National Laboratory (LLNL) has a long history of leadership in large-scale computing. Our current platform, Sequoia, is a 96-rack BlueGene/Q system that is currently number three on the Top 500 list. Our next platform, Sierra, will be a heterogeneous system delivered by a partnership between IBM, NVIDIA, and Mellanox. While the platforms are diverse, they represent a carefully considered strategy. In this talk, we will compare and contrast these platforms and the applications that run on them, and provide a glimpse into LLNL's strategy beyond Sierra.

Biography

Bronis R. de Supinski is the Chief Technology Officer (CTO) for Livermore Computing (LC) at Lawrence Livermore National Laboratory (LLNL). In this role, he is responsible for formulating LLNL's large-scale computing strategy and overseeing its implementation. His position requires frequent interaction with high performance computing (HPC) leaders and he oversees several collaborations with the HPC industry as well as academia.

Prior to becoming CTO for LC, Bronis led several research projects in LLNL's Center for Applied Scientific Computing. Most recently, he led the Exascale Computing Technologies (ExaCT) project and co-led the Advanced Scientific Computing (ASC) program's Application Development Environment and Performance Team (ADEPT). ADEPT is responsible for the development environment, including compilers, tools and run time systems, on LLNL's large-scale systems. ExaCT explored several critical directions related to programming models, algorithms, performance, code correctness and resilience for future large scale systems. He currently continues his interests in these topics, particularly programming models, and serves as the Chair of the OpenMP Language Committee.

Bronis earned his Ph.D. in Computer Science from the University of Virginia in 1998 and joined CASC in July 1998. In addition to his work with LLNL, Bronis is also a Professor of Exascale Computing at Queen's University Belfast and an Adjunct Associate Professor in the Department of Computer Science and Engineering at Texas A&M University. Throughout his career, Bronis has won several awards, including the prestigious Gordon Bell Prize in 2005 and 2006, as well as an R&D 100 Award for his leadership of a team that developed a novel scalable debugging tool. He is a member of the ACM and the IEEE Computer Society.

Faculty Contacts: Dr. Lawrence Rauchwerger

### Crayon: Saving Power through Shape and Color Approximation on Next-Generation Displays

Phillip Stanley-Marbell
Researcher
Computer Science and Artificial Intelligence Laboratory
MIT

4:10pm Monday, February 22, 2016
Room 124 HRBB

Abstract

Because displays account for a large fraction of the power dissipation in smart watches, mobile phones, and tablets, it is important and interesting to develop techniques to reduce display power dissipation.

Crayon is a set of tools and a runtime system we are developing that can be inserted transparently into an operating system's user interface pipeline or applied offline to application assets. Crayon reduces display power dissipation when users accept trading display quality for longer battery life. It works by exploiting three fundamental properties: the limited ability of humans to resolve small changes in shape and color; the image-content-dependent power dissipation of new display technologies such as DLP pico-projectors and OLED displays; and the low cost of computation relative to display power savings.

We have implemented and evaluated Crayon in four contexts: two prototype hardware platforms (with OLED and DLP displays) that enable detailed power measurements, an Android tablet, and a set of cross-platform tools. In exchange for minimally perceptible visual artifacts, Crayon reduces display power dissipation by 55 percent on average. We show that these savings come at low overhead, and we quantify the acceptability of Crayon's optimizations using a perceptual study involving over 400 participants and over 21 thousand image evaluations.

This is joint work with Virginia Estellers (UCLA) and Martin Rinard (MIT).

Biography

Phillip Stanley-Marbell is a researcher at the Massachusetts Institute of Technology, Cambridge, MA, USA. He received his Ph.D. from Carnegie Mellon University in 2007 and was a post-doctoral researcher at TU Eindhoven until 2008, when he joined IBM Research---Zurich as a permanent Research Staff Member. In 2012 he joined Apple Inc. in Cupertino, USA, to see his research ideas deployed in real-world products. Prior to completing his Ph.D., he held intern and full-time positions at AT&T / Lucent Bell Labs, Philips Consumer Communications, Lucent's Data Networking Group, and NEC Research Labs.

Dr. Stanley-Marbell is the author of a programming language textbook published by John Wiley & Sons in 2003, as well as over thirty scientific publications and over a dozen patents and patent applications. He is a member of the ACM, IEEE, Sigma Xi, USENIX, and the Swiss Mathematical Society. From 2003 to 2004, he served as the copy editor for the ACM Mobile Computing and Communications Review journal.

His research interests are in exploiting the physics of signals in nature and the flexibility of human perception to make computation more energy-efficient. His recent work explores these interests through the design of programming language constructs, algorithms, system architectures, and circuit implementations that permit as many errors as the constraints of the input signals or the consumers of the system’s output can tolerate.

Faculty Contacts: Dr. Dilma Da Silva

### Interactive Sound Simulation for Engineering Design and Virtual Environments

Dinesh Manocha
Phi Delta Theta/Mason Distinguished Professor
Department of Computer Science
University of North Carolina, Chapel Hill

4:10pm Monday, February 29, 2016
Room 124 HRBB

Abstract

Extending the frontier of visual computing, sound simulation utilizes sound to communicate information to a user and offers an alternative means of visualization. By harnessing the sense of hearing, audio rendering can further enhance a user's experience in a multimodal virtual world and is required for immersive environments, computer games, engineering simulation, virtual training, and designing next generation human-computer interfaces.

In this talk, we will give an overview of our recent work on sound propagation, spatial sound, and sound rendering. This includes generating realistic physically-based sounds from rigid body dynamics simulations and liquid sounds based on bubble resonance and coupling with fluid simulators. We also describe new, fast algorithms for sound propagation based on improved wave-based techniques and fast geometric sound propagation. Our algorithms improve the state of the art in sound propagation by one to two orders of magnitude, and we demonstrate that it is possible to perform interactive propagation in complex, dynamic environments by utilizing the computational capabilities of multi-core CPUs and many-core GPUs.

We describe new techniques to compute personalized HRTFs and have integrated our algorithms with the Oculus VR headset. We also demonstrate applications to architectural acoustics, engineering design, computer gaming, spatial audio, and outdoor sound propagation.

Biography

Dinesh Manocha is currently the Phi Delta Theta/Mason Distinguished Professor of Computer Science at the University of North Carolina at Chapel Hill. He has received a Junior Faculty Award, an Alfred P. Sloan Fellowship, an NSF CAREER Award, an Office of Naval Research Young Investigator Award, a Honda Research Initiation Award, and the Hettleman Prize for Scholarly Achievement.

Along with his students, Manocha has also received 14 best paper and panel awards at the leading conferences on graphics, geometric modeling, visualization, multimedia, and high-performance computing. He is a Fellow of ACM, AAAS, and IEEE, and received the Distinguished Alumni Award from the Indian Institute of Technology Delhi.

Manocha has published more than 400 papers in the leading conferences and journals.  Some of the software systems related to collision detection, GPU-based algorithms, sound simulation, and geometric computing developed by his group have been downloaded by more than 150K users and are licensed to more than 50 leading companies in computer graphics, CAD, simulation, gaming, and robotics.  He has supervised 28 Ph.D. dissertations.

Faculty Contacts: Dr. Nancy M. Amato

### Fast Simulation of Complex Multiscale Phenomena

Ming C. Lin
John R. & Louise S. Parker Distinguished Professor
Department of Computer Science
University of North Carolina, Chapel Hill

2:20pm Tuesday, March 1, 2016
Room 124 HRBB

Abstract

From turbulent fluid flow to chaotic traffic patterns, many phenomena observed in nature and in society show complex emergent behavior on different scales. The modeling and simulation of such phenomena continues to intrigue scientists and researchers across different fields, from computational sciences, medicine, traffic engineering, urban planning, to social sciences. Understanding and reproducing the visual appearance and dynamic behavior of such complex phenomena through simulation is valuable for enhancing the realism of virtual scenes, for improving the efficiency of design evaluation, for planning of complex procedures, and for training of skilled personnel.  This is also essential for interactive applications, where it is impossible to manually animate all the possible interactions and anticipate all responses beforehand.

In this talk, I survey several recent advances that combine macroscopic models of large-scale flows with local representations of intricate behaviors to capture both the aggregate dynamics and the fine-grained details of such phenomena, with significantly accelerated performance on commodity hardware, as well as novel algorithms that integrate physics-based modeling and data-driven synthesis to solve challenging research problems. Example dynamical systems that I will describe using these hybrid techniques include soft tissue modeling, turbulent fluids, granular flows, crowd simulation, traffic visualization, and multimodal interaction. I conclude by discussing some possible future directions.

Biography

Ming C. Lin is currently John R. & Louise S. Parker Distinguished Professor of Computer Science at the University of North Carolina (UNC), Chapel Hill.  She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and nine best paper awards at international conferences.  She is a Fellow of ACM and IEEE.

Her research interests include physically-based modeling, virtual environments, sound rendering, haptics, robotics, and geometric computing. She has (co-)authored more than 250 refereed publications in these areas and co-edited or authored four books. She has served on over 150 program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is a member of the IEEE CS Board of Governors, Chair of the 2015 IEEE Computer Society (CS) Transactions Operations Committee, and a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014). She has also served on several editorial boards, steering committees, and advisory boards of international conferences, government agencies, and industry.

Faculty Contact: Dr. Nancy M. Amato

### Quantum Simulation for Quantum Emulation

Peter Reynolds
Senior Research Scientist/Chief Scientist, Physical Sciences
Army Research Office

4:10pm Wednesday, March 2, 2016
Room 124 HRBB

Abstract

I will describe how the advent of laser cooling and trapping of atoms and ions progressed to the achievement of quantum degeneracy in ultra-dilute atomic gases, and then to their trapping in optical lattices. These incredibly dilute gases can be used in a variety of unexpected ways, from altering chemical reaction dynamics, to serving as concept atomtronic devices (generalizing electronics with the many additional degrees of freedom afforded by atoms), to the creation of physical embodiments of model Hamiltonians. The latter is an intriguing way to "solve" numerically intractable models such as the Fermi-Hubbard model, believed to be at the heart of high-temperature superconductivity. Competing phases are expected to be realized with the introduction of long-ranged dipolar interactions. "Solving" intractable Hamiltonians in this manner is the essence of Quantum Emulation.

As the strength of the dipole moment is a key parameter in this competition, it is necessary to be able to predict such dipole moments accurately for the molecules that might be cooled and trapped in future optical lattice experiments.  The gold standard for accuracy in computation of properties like this is quantum Monte Carlo simulation.  This approach will be described, and I will show how we used it to compute the dipole moment for some specific molecules of interest to the community because they are candidates in optical lattice emulation experiments.  One molecule in particular, LiSr, is extremely weakly bound and has never been made, yet has desirable properties of coupling by both electric and magnetic interactions.  Traditional methods of computing the binding energy and the dipole moment converge very poorly for such weakly bound molecules.  Thus we use Quantum Simulation to inform future work in Quantum Emulation.

Biography

Born and raised in New York City, Dr. Reynolds graduated from the famed Bronx High School of Science. From there he went to the University of California at Berkeley, to take the still developing "Berkeley Physics Course." He completed the physics honors program "with great distinction," was a Regents Scholar, was inducted into Phi Beta Kappa, and was awarded the physics department Citation (for best student). He then attended MIT to study statistical physics. He was both an NSF and an IBM pre-doctoral Fellow. In 1979, Dr. Reynolds earned a Ph.D. for research in critical phenomena in disordered systems. After a stint as an assistant research professor at Boston University, he became a staff scientist in 1980 at Lawrence Berkeley Laboratory, where his interests shifted to quantum simulations.

From 1988 to 2003, Dr. Reynolds was an Office of Naval Research (ONR) program officer in atomic and molecular physics. His program led to the current excitement in ultracold-atom physics, including BEC. He was also the Navy Principal to the DoD High Performance Computing Modernization Program, serving as the Navy's science and technology advisor. In 2003, Dr. Reynolds joined the Army Research Office (ARO), where he has been a Program Manager, the Associate Director in the Physical Sciences Directorate, and Physics Division Chief, prior to assuming his current position as Senior Research Scientist.  In his current position he works broadly across the Army, helping to set strategic directions in research across the Physical Sciences.  He has had a principal role in expanding Army involvement in Quantum Information Science.

Faculty Contact: Dr. Andreas Klappenecker

### No Littering!

Bjarne Stroustrup
Managing Director in the technology division of Morgan Stanley

4:10pm Monday, March 21, 2016
Room 124 HRBB

Slides to view

Abstract

You can write C++ programs that are statically type safe and have no resource leaks. You can do that simply, without loss of performance, and without limiting C++’s expressive power. This model for type- and resource-safe C++ has been implemented using a combination of ISO standard C++ language facilities, static analysis, and a tiny support library (written in ISO standard C++). This supports the general thesis that garbage collection is neither necessary nor sufficient for quality software. This talk describes the techniques used to eliminate dangling pointers and to ensure resource safety.

Biography

Bjarne Stroustrup is the designer and original implementer of C++ as well as the author of The C++ Programming Language (Fourth Edition), A Tour of C++, Programming: Principles and Practice using C++ (Second Edition), and many popular and academic publications. Dr. Stroustrup is a Managing Director in the technology division of Morgan Stanley in New York City as well as a visiting professor at Columbia University. He also retains a connection with Texas A&M University as University Distinguished Professor. He is a member of the US National Academy of Engineering, and an IEEE, ACM, and CHM fellow. His research interests include distributed systems, design, programming techniques, software development tools, and programming languages. He is actively involved in the ISO standardization of C++. He holds a master's in Mathematics from Aarhus University and a PhD in Computer Science from Cambridge University, where he is a member of Churchill College.

Faculty Contact: Dr. Lawrence Rauchwerger

### Exact Analysis of TTL Cache Networks

Florin Ciucu
Assistant Professor
Department of Computer Science
University of Warwick

4:10pm Wednesday, March 23, 2016
Room 124 HRBB

Abstract

In Time-to-Live (TTL) caches, objects are evicted upon the expiration of individual timers, whereas "cache misses" reset those timers. TTL caches are not only practically relevant, as evidenced by DNS, web-caching, and OpenFlow implementations, but are also theoretically more appealing than classical cache models such as LRU or FIFO, whose analysis is notoriously hard. Unlike the single-cache case, whose exact analysis is well understood, the network analysis typically relies on Poisson approximations, which can lend themselves to significant errors, e.g., 30% in the hit ratios.

In this talk we present an exact analysis of TTL cache networks in a quite general setting, in which networks have a feedforward structure, requests arrive as not-necessarily-Poisson processes, and the class of TTL distributions is large. Moreover, we consider several TTL models with different tradeoffs between the offered consistency guarantees and the hit ratios. Lastly, we consider a novel TTL policy that learns the (unknown) popularities of objects, such that in the limit it behaves as the perfect but hypothetical LFU policy.

Biography

Florin Ciucu was educated at the Faculty of Mathematics, University of Bucharest (B.Sc. in Informatics, 1998), George Mason University (M.Sc. in Computer Science, 2001), and University of Virginia (Ph.D. in Computer Science, 2007). Between 2007 and 2008 he was a Postdoctoral Fellow in the Electrical and Computer Engineering Department at the University of Toronto. Between 2008 and 2013 he was a Senior Research Scientist at Telekom Innovation Laboratories (T-Labs) and TU Berlin. Currently he is an Assistant Professor in the CS department at the University of Warwick. His research interests are in the stochastic analysis of communication networks, resource allocation, and randomized algorithms. He has served on the Technical Program Committee of several conferences including IEEE Infocom, ACM Sigmetrics, IFIP Performance, ACM e-Energy, and ACM Mobihoc. Florin is a recipient of the ACM Sigmetrics 2005 Best Student Paper Award and the IFIP Performance 2014 Best Paper Award.

### Perception of People and Scenes for Robot Learning from Demonstration

Odest Chadwicke Jenkins
Associate Professor
Department of Computer Science and Engineering
University of Michigan

4:10pm Wednesday, April 6, 2016
Room 124 HRBB

Abstract

We are at the dawn of a robotics revolution where the vision of interconnected, heterogeneous robots in widespread use will become a reality. Similar to "app stores" for modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated of developers and researchers.

In order for people to fluently program autonomous robots, a robot must be able to interpret commands that accord with a human’s model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit the critical missing component is the grounding of symbols that conceptually tie together low-level perception with user programs and high-level reasoning systems. Such a grounding will enable robots to perform tasks that require extended goal-directed autonomy as well as fluidly work with human partners.

Towards making robot programming more accessible and general, I will present our work on improving perception of people and scenes to enable robot learning from human demonstration. Robot learning from demonstration (LfD) has emerged as a compelling alternative to explicit coding in a programming language: robots are programmed implicitly from a user's demonstration. Phrasing LfD as a statistical regression problem, I will present our multivalued regression algorithms for learning robot controllers in the face of perceptual aliasing. I will also describe how such regressors can be used within physics-based estimation systems to learn controllers for humanoids from monocular video of human motion. With respect to learning for sequential manipulation tasks, our recent work aims to perceive axiomatic descriptions of scenes from depth data for planning goal-directed behavior.

Biography

Odest Chadwicke Jenkins, Ph.D., is an Associate Professor of Computer Science at the University of Michigan.  Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). Prof. Jenkins was selected as a Sloan Research Fellow in 2009. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based human tracking from video. He has also received Young Investigator awards from the Office of Naval Research (ONR) for his research in learning dynamical primitives from human motion, the Air Force Office of Scientific Research (AFOSR) for his work in manifold learning and multi-robot coordination and the National Science Foundation (NSF) for robot learning from multivalued human demonstrations.

Faculty Contact: Dr. Nancy M. Amato and Dr. Dylan Shell

### Steering Large Swarms when Each Agent Gets the Same Input

Aaron Becker
Assistant Professor
Department of Electrical and Computer Engineering
University of Houston

4:10pm Monday, April 11, 2016
Room 124 HRBB

Abstract

In the 2014 Disney movie Big Hero 6, the protagonist Hiro offers a profound view into the future by manufacturing a swarm of 10^5 microbots. Hiro controls them to self-assemble, to build structures, and to transport goods and materials. While the “microrobots” of the film are fantasy, the ideas are rooted in reality. Producing large numbers of micro- and nanorobots is possible today: microrobots can be manufactured in large numbers by MEMS processes, and biological agents such as bacteria and paramecia can be grown to achieve large swarms.

My vision is for large swarms of robots remotely guided 1) through the human body, to cure disease, heal tissue, and prevent infection and 2) ex vivo to assemble structures in parallel. The biggest barrier to this vision is a lack of control techniques that can reliably exploit large populations despite incredible under-actuation.  Results are validated with hardware experiments using over 100 robots, extensive simulations, and over 10,000 human-user trials.  You can help — visit http://swarmcontrol.net and play some games.

Biography

Aaron T. Becker's passion is robotics and control. As an Assistant Professor in Electrical and Computer Engineering at the University of Houston, he is building the Robotic Swarm Control Lab, and is a 2016 NSF CAREER award recipient.

Previously, as a Research Fellow in a joint appointment with Boston Children's Hospital and Harvard Medical School, he implemented robotics powered and controlled by the magnetic field of an MRI, as a member of the Pediatric Cardiac Bioengineering Lab with Pierre Dupont. As a Postdoctoral Research Associate at Rice University in the Multi-Robot Systems Lab with James McLurkin, Aaron investigated control of distributed systems and nanorobotics with experts in those fields. His online game http://swarmcontrol.net seeks to understand the best ways for a human to control a swarm of robots, crowdsourcing experiments through a community of game players.

Aaron earned his PhD in Electrical & Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Tim Bretl.

Faculty Contact: Dr. Dezhen Song

### Evolution of MATLAB

Cleve Moler
Chief Mathematician
MathWorks

5:00pm Monday April 25, 2016
Hawking Auditorium - Mitchell Physics Building

Abstract

We show how MATLAB has evolved over more than 30 years from a simple matrix calculator to a powerful technical computing environment. We demonstrate several examples of MATLAB applications.  We conclude with a discussion of current developments, including parallel computation on multicore, multicomputer, and cloud systems.

Biography

Cleve Moler is the original author of MATLAB and one of the founders of the MathWorks.  He is currently chairman and chief scientist of the company, as well as a member of the National Academy of Engineering and past president of the Society for Industrial and Applied Mathematics.

Faculty Contact: Dr. Tim Davis