
Hybrid Programming Challenges for Extreme Scale Software

Vivek Sarkar
E.D. Butcher Chair in Engineering,
Professor & Chair of the Department of Computer Science at Rice University,
and Associate Director, Center for Domain-Specific Computing

4:10pm Monday, September 8, 2014
Room 124 HRBB

Abstract

It is widely recognized that computer systems in the next decade will be qualitatively different from current and past computer systems. Specifically, they will be built using homogeneous and heterogeneous many-core processors with hundreds of cores per chip, their performance will be driven by parallelism (million-way parallelism just for a departmental server), and they will be constrained by energy and data movement. They will also be subject to frequent faults and failures. Unlike previous generations of hardware evolution, these Extreme Scale systems will have a profound impact on future software. The software challenges are further compounded by the need to support new workloads and application domains that have not had to worry about parallel computing in the past.

In general, a holistic redesign of the entire software stack is needed to address the programmability and performance requirements of Extreme Scale systems. This redesign will need to span programming models, languages, compilers, runtime systems, and system software. A major challenge in this redesign arises from the fact that current programming systems have their roots in execution models that focused on homogeneous models of parallelism (e.g., OpenMP's roots are in SMP parallelism, MPI's roots are in cluster parallelism, and CUDA and OpenCL's roots are in GPU parallelism). This in turn leads to the "hybrid programming" challenge for application developers, as they are forced to explore approaches that combine two or all three of these models in the same application. Despite early experiences and attempts by some of these programming systems to broaden their scope (e.g., the addition of accelerator pragmas to OpenMP), hybrid programming remains an open problem and a major obstacle for application enablement on future systems.
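To make the hybrid-programming challenge concrete, the sketch below combines MPI (for inter-node parallelism) with OpenMP (for intra-node threading) in a single C program that computes a partial harmonic sum. It is an illustrative example of the kind of code application developers end up writing by hand, not code from the Habanero project or from this talk.

/* Minimal MPI+OpenMP hybrid sketch: MPI across nodes, OpenMP within a node.
 * Illustrative only. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    int provided, rank, size;
    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sums its strided block of indices using OpenMP threads. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = rank; i < N; i += size)
        local += 1.0 / (double)(i + 1);

    /* Combine the per-rank partial sums across the cluster. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %f\n", global);

    MPI_Finalize();
    return 0;
}

The point of the example is the obstacle the abstract describes: the developer must reason about two execution models (ranks and threads), their interaction, and their separate tuning knobs within one application.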

In this talk, we summarize experiences with hybrid programming in the Habanero Extreme Scale Software Research project [1] which targets a wide range of homogeneous and heterogeneous many-core processors in both single-node and cluster configurations. We focus on key primitives in the Habanero execution model that simplify hybrid programming, while also enabling a unified runtime system for heterogeneous hardware. Some of these primitives are also being adopted by the new Open Community Runtime (OCR) open source project [2]. These primitives form the basis for a sophomore-level course on parallel programming at Rice University, and have been validated in a range of applications including medical imaging applications studied in the NSF Expeditions Center for Domain-Specific Computing (CDSC) [3]. The OCR runtime is also used as the foundation for the newly-created Habanero-C and Habanero-C++ libraries.
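As a rough illustration of the async-finish task parallelism at the heart of the Habanero execution model, the C sketch below expresses the same pattern using OpenMP tasks. It is an analogy only and does not use the Habanero-C, Habanero-C++, or OCR APIs.

/* Async-finish analogy via OpenMP tasks (NOT the Habanero-C or OCR API).
 * "async" spawns a child task; "finish" waits for all tasks spawned
 * (transitively) inside it. Here taskwait waits only for direct children,
 * which is sufficient for this recursive example. */
#include <omp.h>
#include <stdio.h>

static long fib(long n) {
    if (n < 2) return n;
    long x, y;
    /* "async": spawn a child task for the first subproblem. */
    #pragma omp task shared(x)
    x = fib(n - 1);
    y = fib(n - 2);
    /* "finish" (local): wait for the spawned child before combining. */
    #pragma omp taskwait
    return x + y;
}

int main(void) {
    long r;
    #pragma omp parallel
    #pragma omp single
    r = fib(20);                 /* one task spawns the recursive task tree */
    printf("fib(20) = %ld\n", r);
    return 0;
}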

Background material for this talk will be drawn in part from the DARPA Exascale Software Study report [4] led by the speaker. This talk will also draw from a recent study led by the speaker on Synergistic Challenges in Data-Intensive Science and Exascale Computing [5] for the US Department of Energy's Office of Science. We would like to acknowledge the contributions of all participants in both studies, as well as the contributions of all members of the Habanero, OCR, and CDSC projects.

Biography

Vivek Sarkar conducts research in multiple aspects of parallel software including programming languages, program analysis, compiler optimizations and runtimes for parallel and high performance computer systems. He currently leads the Habanero Extreme Scale Software Research project at Rice University, and serves as Associate Director of the NSF Expeditions project on the Center for Domain-Specific Computing.

Prior to joining Rice in July 2007, Vivek was Senior Manager of Programming Technologies at IBM Research. His responsibilities at IBM included leading IBM's research efforts in programming models, tools, and productivity in the PERCS project during 2002-2007 as part of the DARPA High Productivity Computing System program. His past projects include the X10 programming language, the Jikes Research Virtual Machine for the Java language, the MIT RAW multicore project, the ASTI optimizer used in IBM's XL Fortran product compilers, the PTRAN automatic parallelization system, and profile-directed partitioning and scheduling of Sisal programs.

Vivek holds a B.Tech. degree from the Indian Institute of Technology, Kanpur, an M.S. degree from the University of Wisconsin-Madison, and a Ph.D. from Stanford University. He became a member of the IBM Academy of Technology in 1995, was named to the E.D. Butcher Chair in Engineering at Rice University in 2007, and was inducted as an ACM Fellow in 2008. Vivek has been serving as a member of the US Department of Energy's Advanced Scientific Computing Advisory Committee (ASCAC) since 2009. He has also served as chair of the Computer Science Department at Rice University since July 2013.

Faculty Contact: Dr. Lawrence Rauchwerger (rwerger [at] cse.tamu.edu)


Current Trends in Parallel Numerical Computing and Challenges for the Future

Jack Dongarra
University Distinguished Professor
Department of Electrical Engineering and Computer Science
University of Tennessee

4:10pm Monday, October 27, 2014
Room 124 HRBB

Abstract

In this talk we examine how high performance computing has changed over the last decade and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.
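One classic example of the memory-hierarchy management the abstract alludes to is cache blocking (tiling). The C sketch below tiles a matrix multiply so that each block of the operands is reused while it resides in cache; it is illustrative only, and the block size is an assumed, machine-dependent tuning parameter of the kind that compile-time and run-time techniques must choose.

/* Cache-blocked (tiled) matrix multiply: C += A * B, row-major n x n.
 * Illustrative sketch; BLK would be tuned to the target cache. */
#include <stddef.h>

#define BLK 64   /* assumed block size */

void dgemm_blocked(size_t n, const double *A, const double *B, double *C) {
    for (size_t ii = 0; ii < n; ii += BLK)
        for (size_t kk = 0; kk < n; kk += BLK)
            for (size_t jj = 0; jj < n; jj += BLK)
                /* Multiply one BLK x BLK tile while it stays cache-resident. */
                for (size_t i = ii; i < ii + BLK && i < n; i++)
                    for (size_t k = kk; k < kk + BLK && k < n; k++) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < jj + BLK && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}

Libraries such as those the speaker has led (e.g., LAPACK-style blocked algorithms) build on exactly this kind of restructuring to match the memory hierarchy.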

Biography

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee, and holds the titles of Distinguished Research Staff in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow at the University of Manchester, and Adjunct Professor in the Computer Science Department at Rice University. He is the director of the Innovative Computing Laboratory at the University of Tennessee. He is also the director of the Center for Information Technology Research at the University of Tennessee, which coordinates and facilitates IT research efforts at the university.

He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda, and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he was the recipient of the IEEE IPDPS Charles Babbage Award; and in 2013 he was the recipient of the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high performance computing. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a member of the National Academy of Engineering.

Faculty Contact: Dr. Lawrence Rauchwerger (rwerger [at] cse.tamu.edu)


Cyber-Physical Systems with Human in the Loop

Ruzena Bajcsy
NEC Distinguished Professor
Electrical Engineering and Computer Sciences
Director of CITRIS
University of California, Berkeley

4:10pm Wednesday, January 21, 2015
Room 2005 Emerging Technologies Building

Abstract

We are interested in the dynamic interaction of physical systems and humans. Our approach is to model the kinematics and dynamics of human activity as people interact with semi-autonomous systems. We ask when the human should be in control and when it is appropriate to leave control to the autonomous system; in other words, how should they cooperate? We use motion capture, stereo vision and body sensors (accelerometers, EMG) as observables for modeling human activity. Since the human body is a complex kinematic system, we take advantage of identifying the most informative joints for a given activity. In addition to these most informative joints, we also take into consideration the velocity vectors at each joint, as well as acceleration, muscle strength, and torque forces, when identifying the most informative moving joints. This methodology enables us to segment human activity in a natural, non-ad hoc way and to model these movement primitives as linear continuous systems/modes, which are then connected by switching mechanisms into a hybrid system for a given activity and/or interaction.
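The hybrid-system view described above can be illustrated with a tiny simulation: each movement primitive is a linear mode x_{k+1} = A_m x_k, and a switching signal selects the active mode. The matrices and switching rule in the C sketch below are invented purely for illustration and are not taken from the speaker's work.

/* Toy switched linear system: two hypothetical movement primitives (modes)
 * stitched together by a switching signal into a hybrid trajectory. */
#include <stdio.h>

#define DIM 2

static void step(const double A[DIM][DIM], double x[DIM]) {
    double y[DIM] = {0};
    for (int i = 0; i < DIM; i++)
        for (int j = 0; j < DIM; j++)
            y[i] += A[i][j] * x[j];
    for (int i = 0; i < DIM; i++) x[i] = y[i];
}

int main(void) {
    double A0[DIM][DIM] = {{0.95, 0.10}, {-0.10, 0.95}};  /* mode 0: slow rotation */
    double A1[DIM][DIM] = {{0.90, 0.00}, { 0.00, 0.80}};  /* mode 1: decay toward rest */
    double x[DIM] = {1.0, 0.0};        /* toy joint state (e.g., angle, velocity) */

    for (int k = 0; k < 20; k++) {
        int mode = (k < 10) ? 0 : 1;   /* made-up switching signal */
        step(mode == 0 ? A0 : A1, x);
        printf("k=%2d mode=%d x=(%.3f, %.3f)\n", k, mode, x[0], x[1]);
    }
    return 0;
}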

The theoretical findings will be illustrated by two applications: one modeling the driver and the car, and the other observing elderly people exercising, with a coach (automated or human) intervening as needed.

Biography

Dr. Ruzena Bajcsy (“buy chee”) was appointed Director of CITRIS and professor in the EECS Department at the University of California, Berkeley, on November 1, 2001. Prior to coming to Berkeley, she was Assistant Director of the National Science Foundation’s Computer and Information Science and Engineering Directorate (CISE) between December 1, 1998 and September 1, 2001. As head of the CISE directorate, Dr. Bajcsy managed a $500 million annual budget. She came to the NSF from the University of Pennsylvania, where she had been a professor of computer science and engineering since 1972. In 2004 she became CITRIS director emerita, and she is now a full-time NEC Distinguished Professor of EECS.

Dr. Bajcsy is a pioneering researcher in machine perception, robotics and artificial intelligence. She is an NEC Distinguished Professor in the Electrical Engineering and Computer Science Department at Berkeley. She was also Director of the University of Pennsylvania’s General Robotics and Active Sensory Perception Laboratory, which she founded in 1978.

Dr. Bajcsy has done seminal research in the areas of human-centered computer control, cognitive science, robotics, computerized radiological/medical image processing and artificial vision. She is highly regarded not only for her significant research contributions, but also for her leadership in the creation of a world-class robotics laboratory, recognized worldwide as a premier research center. She is a member of the National Academy of Engineering as well as the Institute of Medicine. She received the Franklin Medal in 2009, has been a member of the American Philosophical Society (founded by Benjamin Franklin) since 2005, and has been a member of the American Academy of Arts and Sciences since 1998. She is especially known for her wide-ranging, broad outlook on the field and her cross-disciplinary talent and leadership in successfully bridging such diverse areas as robotics and artificial intelligence, engineering and cognitive science.

Dr. Bajcsy received her master’s and Ph.D. degrees in electrical engineering from Slovak Technical University in 1957 and 1967, respectively. She received a Ph.D. in computer science in 1972 from Stanford University, and from that time taught and did research in Penn’s Department of Computer and Information Science. She began as an assistant professor and within 13 years became chair of the department. Prior to her work at the University of Pennsylvania, she taught during the 1950s and 1960s as an instructor and assistant professor in the Department of Mathematics and Department of Computer Science at Slovak Technical University in Bratislava. She has served as advisor to more than 50 Ph.D. recipients. In 2001 she received honorary doctorates from the University of Ljubljana in Slovenia and from Lehigh University, and she received the ACM Allen Newell Award. In 2012 she received an honorary degree from the University of Pennsylvania in Philadelphia and an honorary degree from the Royal Institute of Technology (KTH) in Stockholm.

Faculty Contact: Dr. Nancy M. Amato (amato [at] cse.tamu.edu)


Impact of Software-Defined Networks on Next Generation Real-Time Applications

Klara Nahrstedt
Professor
University of Illinois at Urbana-Champaign

4:10pm Wednesday, March 25, 2015
Room 124 HRBB

Abstract

Current real-time applications such as telepresence systems require very strong real-time interactivity. Requirements are even more stringent in multi-stream and multi-site teleimmersive applications due to strong dependencies across geographically distributed streams. In this talk, I will discuss the impact of a new networking paradigm, Software-Defined Networking (SDN), on the next generation of real-time applications. I will present OpenSession, a new ‘Northbound’ session-network control plane for multi-stream and multi-site real-time applications, which represents the interaction between the application-level controller and the SDN controller. OpenSession aims to improve interactivity, resource utilization and scalability by decoupling application-layer data and control plane functionality, and by partially offloading data plane functionality to network-layer switches. The control of network switches during the session run-time happens via OpenSession, which leverages Software-Defined Networking (e.g., OpenFlow) assistance. Our experiments with this session-network control are very encouraging: OpenSession improves the performance, interactivity and resource usage of real-time applications such as 3D teleimmersion at the data plane.
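As a conceptual illustration of what offloading data-plane functionality to network-layer switches means, the C sketch below models a toy match-action flow table of the kind an SDN controller populates. It is not OpenSession code and does not use the OpenFlow API; all fields, addresses and ports are invented for illustration.

/* Toy match-action flow table: a controller installs rules, the switch
 * data plane forwards packets by matching them. Conceptual sketch only. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t src_ip, dst_ip;   /* match fields (0 = wildcard) */
    uint16_t dst_port;
    uint16_t out_port;         /* action: forward to this switch port */
} flow_rule;

static int match(const flow_rule *r, uint32_t src, uint32_t dst, uint16_t port) {
    return (r->src_ip == 0 || r->src_ip == src) &&
           (r->dst_ip == 0 || r->dst_ip == dst) &&
           (r->dst_port == 0 || r->dst_port == port);
}

int main(void) {
    /* Hypothetical rules steering two media streams of one session. */
    flow_rule table[] = {
        { 0x0A000001, 0x0A000002, 5004, 3 },   /* video stream -> port 3 */
        { 0x0A000001, 0x0A000002, 5006, 4 },   /* audio stream -> port 4 */
    };
    uint16_t out = 0;                          /* 0 = no rule: punt to controller */
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (match(&table[i], 0x0A000001, 0x0A000002, 5004)) { out = table[i].out_port; break; }
    printf("forward to port %u\n", out);
    return 0;
}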

Biography

Klara Nahrstedt is the Ralph and Catherine Fisher Professor in the Computer Science Department, and Interim Director of the Coordinated Science Laboratory in the College of Engineering, at the University of Illinois at Urbana-Champaign. Her research interests are directed toward 3D teleimmersive systems, mobile systems, Quality of Service (QoS) and resource management, Quality of Experience in multimedia systems, and real-time security in mission-critical systems. She is the co-author of the widely used multimedia books ‘Multimedia: Computing, Communications and Applications’, published by Prentice Hall, and ‘Multimedia Systems’, published by Springer Verlag. She is the recipient of the IEEE Communications Society Leonard Abraham Award for Research Achievements, the University Scholar award, the Humboldt Award, the IEEE Computer Society Technical Achievement Award, and the ACM SIGMM Technical Achievement Award, and she is a former chair of the ACM Special Interest Group on Multimedia. She was the general chair of ACM Multimedia 2006, general chair of ACM NOSSDAV 2007, and general chair of IEEE PerCom 2009.

Klara Nahrstedt received her Diploma in Mathematics, with a specialization in numerical analysis, from Humboldt University, Berlin, Germany in 1985. In 1995 she received her PhD from the Department of Computer and Information Science at the University of Pennsylvania. She is an ACM Fellow, an IEEE Fellow, and a member of the Leopoldina German National Academy of Sciences.

Faculty Contact: Dr. Lawrence Rauchwerger (rwerger [at] cse.tamu.edu)


Technical Solutions Underlying Wireless Health Systems

John A. Stankovic
BP America Professor
University of Virginia

4:10pm Monday, March 30, 2015
Room 124 HRBB

Abstract

Various types of wireless health systems have been deployed in thousands of homes, and thousands of wellness apps are available for smart phones. However, the purpose, value, and capabilities of these systems span a very broad spectrum, from the very general to the very specific. While the ultimate goal is good health, there are many underlying technical issues that must be solved. In this talk I present a collection of technical problems and solutions, primarily for in-home health care and mobile-based wellness apps. The technical topics include: a flexible in-home architecture called Empath and its use with real patients, a multi-level semantic-based classification and anomaly detection subsystem, and controlling heart rate with music. Open research questions will be mentioned throughout the talk.
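As a small, hypothetical illustration of the kind of low-level anomaly detection such systems build on, the C sketch below flags heart-rate samples that deviate sharply from a running mean. The data, the 3-sigma threshold, and the method are assumptions for illustration and are not drawn from the Empath system or the talk.

/* Toy anomaly check on heart-rate samples using a running mean/variance
 * (Welford's method) and a 3-standard-deviation threshold. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double samples[] = { 72, 75, 71, 74, 73, 128, 72, 70 };  /* fabricated bpm readings */
    int n = sizeof samples / sizeof samples[0];
    double mean = 0, m2 = 0;
    for (int i = 0; i < n; i++) {
        if (i > 2) {
            double sd = sqrt(m2 / (i - 1));
            if (sd > 0 && fabs(samples[i] - mean) > 3 * sd)
                printf("sample %d (%.0f bpm) flagged as anomalous\n", i, samples[i]);
        }
        /* Welford's online update of mean and variance with this sample. */
        double d = samples[i] - mean;
        mean += d / (i + 1);
        m2 += d * (samples[i] - mean);
    }
    return 0;
}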

Biography

Professor John A. Stankovic is the BP America Professor in the Computer Science Department at the University of Virginia. He served as Chair of the department, completing two terms (8 years). He is a Fellow of both the IEEE and the ACM. He won the IEEE Real-Time Systems Technical Committee's Award for Outstanding Technical Contributions and Leadership. He also won the IEEE Distributed Processing Technical Committee's Award for Distinguished Achievement (inaugural winner). He has won seven best paper awards and several best paper runner-up awards in wireless sensor networks research. He is highly cited (h-index of 102) and has presented many invited keynotes and distinguished lectures. Professor Stankovic also served on the Board of Directors of the Computing Research Association for 9 years. Currently, he serves on the National Academies' Computer Science and Telecommunications Board. He was awarded the University of Virginia School of Engineering Distinguished Faculty Award. Before joining the University of Virginia, Professor Stankovic taught at the University of Massachusetts, where he won an outstanding scholar award. He was the Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems and was a founder and co-editor-in-chief of the Real-Time Systems journal. His research interests are in wireless health, wireless sensor networks, cyber physical systems, distributed computing, and real-time systems. Prof. Stankovic received his PhD from Brown University.

Faculty Contact: Radu Stoleru (stoleru [at] cse.tamu.edu)


Elastic Software Infrastructure to Support Computing Clouds for Cyber-Physical Systems

Douglas C. Schmidt
Associate Chair of Computer Science and Engineering
Professor of Computer Science
Vanderbilt University

4:10pm Monday, April 27, 2015
Room 124 HRBB

Abstract

Cyber-Physical Systems (CPSs) are increasingly composed of services and applications deployed across a range of communication topologies, computing platforms, and sensing and actuation devices. These services and applications often form parts of multiple end-to-end cyber-physical flows (i.e., end-to-end task chains) that operate in resource constrained environments. In such operating conditions, each service within the end-to-end cyber-physical flows must process events belonging to other services or applications, while providing dependable quality of service (QoS) assurance (e.g., timeliness, reliability, and trustworthiness) within the constraints of limited resources, or with the ability to fail over to providers of last resort (e.g., a public utility in the case of a smart grid).

CPSs have traditionally been designed and implemented using resources procured and maintained in-house. Significant budget constraints are driving researchers and practitioners to consider cost-effective alternatives that still ensure mission- and safety-critical properties. The emergence of dependable computing clouds enables the consideration of new factors in the design and operation of CPSs, including offering economic incentives; aggregating and disaggregating behaviors dynamically to reduce risk; consolidating and sharing physical hardware among different applications to reduce power consumption and heat generation; and auto-scaling computing, communication, and even sensing and actuation resources on demand, to ensure that CPSs can use the optimum number of resources without incurring costs when resources are idle.

Despite the promise held by cloud computing, however, supporting the dependability requirements of CPSs is hard. This talk will discuss a number of technical issues emerging in this context, including: precise auto-scaling of resources with a system-wide focus; flexible optimization algorithms to balance real-time constraints with cost and other goals; improved fault-tolerant fail-over to support end-to-end real-time requirements; and data provisioning and load-balancing algorithms that rely on physical properties of computations.
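To make the auto-scaling issue concrete, the C sketch below shows a toy threshold policy that grows or shrinks a pool of workers based on observed utilization while respecting a deadline-driven floor. The policy, thresholds, and numbers are assumptions for illustration and do not come from the talk.

/* Toy auto-scaling policy: scale out under high utilization, scale in
 * under low utilization, but never drop below a floor chosen to meet
 * end-to-end deadlines. Illustrative sketch only. */
#include <stdio.h>

typedef struct {
    int workers;        /* currently provisioned VMs */
    int min_workers;    /* floor needed to meet end-to-end deadlines */
    int max_workers;    /* budget ceiling */
} pool;

static void autoscale(pool *p, double utilization) {
    if (utilization > 0.80 && p->workers < p->max_workers)
        p->workers++;                  /* scale out under load */
    else if (utilization < 0.30 && p->workers > p->min_workers)
        p->workers--;                  /* scale in, but respect the deadline floor */
}

int main(void) {
    pool p = { .workers = 4, .min_workers = 2, .max_workers = 16 };
    double samples[] = { 0.90, 0.95, 0.85, 0.50, 0.25, 0.20, 0.20 };
    for (int i = 0; i < 7; i++) {
        autoscale(&p, samples[i]);
        printf("util=%.2f -> workers=%d\n", samples[i], p.workers);
    }
    return 0;
}

The talk's point is precisely that such local threshold rules are not enough: a system-wide, optimization-based view is needed to reconcile real-time constraints with cost.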

Biography

Dr. Douglas C. Schmidt is a Professor of Computer Science at Vanderbilt University and a Visiting Scientist at the Software Engineering Institute (SEI) at Carnegie Mellon University. He was previously the Chief Technology Officer at the SEI, a Program Manager at DARPA, and a member of the Air Force Scientific Advisory Board. Dr. Schmidt's research focuses on software patterns, optimization techniques, domain-specific modeling environments, and empirical analyses of middleware frameworks for distributed real-time embedded systems and mobile cloud computing. He has published 10 books and more than 500 technical papers and has mentored and graduated over 40 Ph.D. and Masters students.

In addition to his academic research, commercial experience, and government service, Dr. Schmidt has led the development of ACE, TAO, CIAO, and CoSMIC for the past two decades. These open-source middleware frameworks and model-driven tools constitute some of the most successful examples of middleware frameworks transitioned from research to industry, being widely used by thousands of companies and agencies worldwide in many domains, including national defense and security, datacom/telecom, financial services, healthcare, and online gaming. He received B.S. and M.A. degrees in Sociology from the College of William and Mary in Williamsburg, Virginia, and an M.S. and a Ph.D. in Computer Science from the University of California, Irvine (UCI) in 1984, 1986, 1990, and 1994, respectively.

Faculty Contact: Dr. Riccardo Bettati (bettati [at] cse.tamu.edu)