2008-2009 Distinguished Lecture Abstracts

Information Visualization for Knowledge Discovery

Dr. Ben Shneiderman,
Founding Director of the Human-Computer Interaction Lab
Department of Computer Science
The University of Maryland

4:10 p.m., Wednesday January 21, 2009
Room 124, Bright Building


Interactive information visualization tools provide researchers with remarkable capabilities to support discovery. By combining powerful data mining methods with user-controlled interfaces, these tools are becoming potent telescopes for high-dimensional data. Users can begin with an overview, zoom in on areas of interest, filter out unwanted items, and then click for details on demand. With careful design and efficient algorithms, the dynamic queries approach to data exploration can provide 100 ms updates even for million-record databases.
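To make the dynamic-queries idea concrete, here is a minimal sketch (illustrative only, not the HCIL implementation) of how slider-driven range predicates can be re-evaluated over a million-record table fast enough for roughly 100 ms updates; the attribute names and thresholds are hypothetical.

    import time
    import numpy as np

    # Build a hypothetical million-record table with two numeric attributes.
    rng = np.random.default_rng(0)
    n = 1_000_000
    price = rng.uniform(0, 500, n)
    rating = rng.uniform(0, 5, n)

    def dynamic_query(low_price, high_price, min_rating):
        """Re-evaluate all slider predicates; return matching record indices."""
        mask = (price >= low_price) & (price <= high_price) & (rating >= min_rating)
        return np.nonzero(mask)[0]

    # One simulated slider move: filter the full table, then time the update.
    t0 = time.perf_counter()
    hits = dynamic_query(50, 150, 4.0)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    print(f"{hits.size:,} of {n:,} records pass the filter in {elapsed_ms:.1f} ms")

On commodity hardware the vectorized mask takes only a few milliseconds, which is what leaves room for the interface to redraw the visualization within the 100 ms budget.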

This talk will start by reviewing the growing list of commercial success stories, such as www.spotfire.com, www.smartmoney.com/marketmap, and www.hivegroup.com. It will then cover recent research progress on visual exploration of large time-series data applied to financial, medical, and genomic data (www.cs.umd.edu/hcil/timesearcher).

Our next step was to combine these key ideas to produce the Hierarchical Clustering Explorer 3.0, which now includes the rank-by-feature framework (www.cs.umd.edu/hcil/hce). By judiciously choosing among ranking criteria for low-dimensional, axis-parallel projections, users can locate desired features of higher-dimensional spaces. Finally, these strategies for unifying statistics with visualization are applied to network data and electronic health records (www.cs.umd.edu/hcil/lifelines2). Demonstrations will be shown.
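As a hedged sketch of the rank-by-feature idea (not HCE 3.0's actual code), the snippet below scores every axis-parallel two-dimensional projection of a synthetic eight-dimensional table by one possible ranking criterion, absolute Pearson correlation, and reports the projections most worth inspecting first.

    from itertools import combinations
    import numpy as np

    # A synthetic high-dimensional table with one planted correlated pair.
    rng = np.random.default_rng(1)
    data = rng.normal(size=(500, 8))
    data[:, 3] = 0.9 * data[:, 0] + 0.1 * data[:, 3]

    def rank_projections(X):
        """Rank all axis-parallel 2D projections by |Pearson r|, best first."""
        corr = np.corrcoef(X, rowvar=False)
        scored = [(abs(corr[i, j]), i, j)
                  for i, j in combinations(range(X.shape[1]), 2)]
        return sorted(scored, reverse=True)

    # The planted pair (axes 0 and 3) should surface at the top of the ranking.
    for score, i, j in rank_projections(data)[:3]:
        print(f"axes ({i}, {j}): |r| = {score:.2f}")

The framework's point is exactly this triage: rather than eyeballing all pairwise scatterplots, the user picks a criterion and lets the ranking direct attention to the few projections that matter.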


Ben Shneiderman is a Professor in the Department of Computer Science and Founding Director (1983-2000) of the Human-Computer Interaction Laboratory at the University of Maryland. He was elected a Fellow of the Association for Computing Machinery (ACM) in 1997 and a Fellow of the American Association for the Advancement of Science (AAAS) in 2001. He received the ACM SIGCHI Lifetime Achievement Award in 2001.

Ben is the author of "Designing the User Interface: Strategies for Effective Human-Computer Interaction" (5th ed., March 2009, forthcoming). With S. Card and J. Mackinlay, he co-authored "Readings in Information Visualization: Using Vision to Think" (1999). With Ben Bederson, he co-authored "The Craft of Information Visualization" (2003). His book "Leonardo's Laptop" appeared in October 2002 (MIT Press) and won the IEEE book award for Distinguished Literary Contribution.

Faculty Contact: Dr. Andruid Kerne

Camera Networks for Security Applications

Dr. Nikolaos Papanikolopoulos,
Distinguished McKnight University Professor, IEEE Fellow
Director of the Center for Distributed Robotics and SECTTRA
Department of Computer Science and Engineering
The University of Minnesota

4:10 p.m., Monday January 26, 2009
Room 124, Bright Building


Algorithmic and hardware advances create many opportunities for image- and vision-based intelligent systems that are human-centric. Computing is now ubiquitous in the household: computers are becoming smaller, more portable, and embedded in many common appliances and devices. Digital cameras, too, are becoming pervasive in society, appearing in many varieties and embedded in devices ranging from cars to telephones.

This talk focuses on the problem of camera networks for security applications. We will present the Hyperion framework (deployed at several mass-transit sites around the U.S.), which computes an extensive set of video analytics for human and crowd activity monitoring, automatic camera placement, camera-to-camera tracking, semi-autonomous calibration, and video forensics analysis. An innovative user interface allows a single user to monitor thousands of cameras. We augment the system's capabilities by pairing cameras with robots, providing swift mobility when the data warrant it. Finally, we aim for an engineering and scientific solution that is respectful of design, privacy, and societal issues.
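The abstract does not spell out Hyperion's algorithms, but one component, camera-to-camera tracking, can be illustrated with a deliberately simplified sketch: hand a target off between non-overlapping views by matching appearance signatures, here normalized color histograms compared with the Bhattacharyya coefficient. Every name and the matching method below are assumptions for illustration, not the deployed system's approach.

    import numpy as np

    def appearance_signature(pixels, bins=16):
        """Normalized per-channel color histogram of a target's pixel patch."""
        hist = [np.histogram(pixels[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)]
        h = np.concatenate(hist).astype(float)
        return h / h.sum()

    def match_across_cameras(target_sig, candidate_sigs):
        """Pick the candidate in the next camera whose signature is closest
        (Bhattacharyya coefficient: higher means more similar)."""
        scores = [np.sum(np.sqrt(target_sig * sig)) for sig in candidate_sigs]
        return int(np.argmax(scores)), max(scores)

    # Simulated handoff: the target seen in camera A reappears, slightly
    # perturbed, as candidate 2 among four detections in camera B.
    rng = np.random.default_rng(2)
    target = rng.integers(0, 256, (40, 20, 3))
    candidates = [rng.integers(0, 256, (40, 20, 3)) for _ in range(4)]
    candidates[2] = np.clip(target + rng.integers(-10, 10, target.shape), 0, 255)

    sigs = [appearance_signature(c) for c in candidates]
    idx, score = match_across_cameras(appearance_signature(target), sigs)
    print(f"best match in camera B: candidate {idx} (similarity {score:.3f})")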


Nikos P. Papanikolopoulos (IEEE Fellow) received the Diploma degree in electrical and computer engineering from the National Technical University of Athens, Greece, in 1987, and the M.S.E.E. and Ph.D. degrees in electrical and computer engineering from Carnegie Mellon University in 1988 and 1992, respectively. He is currently the Distinguished McKnight University Professor in the Department of Computer Science and Engineering at the University of Minnesota and Director of the Center for Distributed Robotics and SECTTRA. His research interests include robotics, computer vision, sensors for transportation applications, and control. He has authored or coauthored more than 200 journal and conference papers in these areas, including fifty-five refereed journal papers. He was a finalist for the Anton Philips Award for Best Student Paper at the 1991 IEEE International Conference on Robotics and Automation (ICRA) and recipient of the Best Video Award at the 2000 IEEE International Conference on Robotics and Automation. He was a McKnight Land-Grant Professor at the University of Minnesota from 1995 to 1997 and has received the NSF Research Initiation and Early Career Development Awards. He was also awarded the Faculty Creativity Award from the University of Minnesota. One of his papers (co-authored with O. Masoud) received the IEEE VTS 2001 Best Land Transportation Paper Award. He has received grants from DARPA, DHS, the U.S. Army, the U.S. Air Force, Sandia National Laboratories, NSF, Johnson Controls, Microsoft, INEEL, USDOT, MN/DOT, Honeywell, and 3M.

Faculty Contact: Dr. Dezhen Song

Spending Moore's Dividend

Dr. Jim Larus, Director of Software, Data Center Futures Project, Microsoft Research

4:10 p.m., Monday February 2, 2009
Room 124, Bright Building


Over the past three decades, regular, predictable improvements in computers were the norm. This progress is attributable to Moore's Law, the steady 40% per year increase in the number of transistors per unit area. These decades were the period in which the personal computer and packaged software industries were born and matured. Software development was facilitated by the comforting knowledge that every generation of processors would run much faster than its predecessor.

This era is over, and the industry has embarked on a historic transition from sequential to parallel computation. The introduction of mainstream parallel (multicore) processors in 2004 marked the end of a remarkable 30-year period during which sequential computer performance increased 40-50% per year. Fortunately, Moore's Law has not been repealed. Semiconductor technology is still doubling the transistors on a chip every two years. However, this flood of transistors is now used to increase the number of independent processors on a chip, rather than to make an individual processor run faster. The challenge the industry now faces is how to make parallel computing mainstream. This talk looks at one facet of this problem by asking how software consumed previous performance growth and whether multicore processors can satisfy the same needs. In short, how did we spend the dividends of Moore's Law, and what can we do in the future?
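A quick back-of-the-envelope check of the figures quoted above: doubling transistor counts every two years corresponds to roughly 41% annual growth (since 2^(1/2) is about 1.41), and 30 years of 40-50% annual sequential gains compound into four to five orders of magnitude.

    # Sanity-check the growth figures quoted in the abstract.
    annual = 2 ** 0.5                 # growth factor implied by two-year doubling
    print(f"two-year doubling = {annual - 1:.0%} per year")

    for rate in (0.40, 0.50):
        total = (1 + rate) ** 30      # 30 years of compounded annual gains
        print(f"{rate:.0%}/year for 30 years -> {total:,.0f}x")

That compounded factor (roughly 24,000x to 190,000x) is the "dividend" the talk asks about: an enormous performance windfall that software absorbed year after year without rewriting.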


James Larus is Director of Software Architecture for the Data Center Futures team in Microsoft Research. Larus has been an active contributor to the programming languages, compiler, and computer architecture communities. He has published many papers and served on numerous program committees and NSF and NRC panels. Larus became an ACM Fellow in 2006.

Larus joined Microsoft Research as a Senior Researcher in 1998 to start the Software Productivity Tools (SPT) group, which he led for five years. The group developed and applied a variety of innovative techniques in static program analysis and constructed tools that found defects (bugs) in software. Its research has had considerable impact on the research community and has shipped in Microsoft products such as the Static Driver Verifier and FxCop, as well as in other widely used internal software development tools. Larus then became the Research Area Manager for programming languages and tools and started the Singularity research project, which demonstrated that modern programming languages and software engineering techniques could fundamentally improve software architectures.

Before joining Microsoft, Larus was an Assistant and Associate Professor of Computer Science at the University of Wisconsin-Madison, where he published approximately 60 research papers and co-led the Wisconsin Wind Tunnel (WWT) research project with Professors Mark Hill and David Wood. WWT was a DARPA- and NSF-funded project that investigated new approaches to simulating, building, and programming parallel shared-memory computers. Larus's research spanned a number of areas, including new and efficient techniques for measuring and recording the behavior of executing programs, tools for analyzing and manipulating compiled and linked programs, programming languages for parallel computing, tools for verifying program correctness, and techniques for compiler analysis and optimization.

Larus received his MS and PhD in Computer Science from the University of California, Berkeley in 1989, and an AB in Applied Mathematics from Harvard in 1980. At Berkeley, Larus developed one of the first systems to analyze Lisp programs and determine how to best execute them on a parallel computer.

Faculty Contact: Dr. Lawrence Rauchwerger

Science, Computational Science, and Computer Science

Dr. Peter Freeman, Emeritus Dean and Professor, Georgia Institute of Technology, and (Former) Assistant Director, National Science Foundation

4:10 p.m., Wednesday February 25, 2009
Room 124, Bright Building


This talk will explore some aspects of, and relationships between, science, computational science, and computer science. Broadly speaking, scientific research is the systematic development of information about a subject and the formulation of explanations that permit generalizations and predictions. For almost fifty years, computer simulation techniques have extended the impact of computer science on research, and tools for computation based on computer science (languages, computers, systems, and so on) have contributed extensively to scientific research for even longer. The focus in computational science today is often only on simulation; a mistake, in this speaker's view. Further, the essence of computer science is much deeper and broader than just providing tools. On the theory-formation side, when one looks at the nature of some of the most important and challenging problems in science, it is easy to see how the essence of computer science provides intellectual tools that will be essential to dealing with them. In short, we have only begun to understand the impact of computer science on the conduct of science (and, of course, engineering).


Peter A. Freeman is an Emeritus Dean and Professor at Georgia Tech and a Director of the Washington Advisory Group, specializing in strategic guidance for organizations involved in research, education, and development. He was Assistant Director of the National Science Foundation, in charge of computing research, from 2002 to 2007. Dr. Freeman has been involved in computing and computer science since 1961 and was in the first Computer Science Ph.D. class at Carnegie Mellon University. He lives in Washington, D.C.

CPSC 681 Faculty Contact: Dr. Robin Murphy

Reinventing Computing

This lecture is also part of the Institute for Applied Mathematics and Computational Sciences (IAMCS) and King Abdullah University of Science and Technology (KAUST) lecture series.

Dr. Burton J. Smith, Technical Fellow, Microsoft

4:10 p.m., Monday November 17, 2008
Room 124, Bright Building


The many-core inflection point presents a new challenge for our industry, namely general-purpose parallel computing. Unless this challenge is met, the continued growth and importance of computing itself and of the businesses engaged in it are at risk. We must make parallel programming easier and more generally applicable than it is now, and build hardware and software that will execute arbitrary parallel programs on whatever scale of system the user has. The changes needed to accomplish this are significant and affect computer architecture, the entire software development tool chain, and the army of application developers that will rely on those tools to develop parallel applications. This talk will point out a few of the hard problems that face us and some prospects for addressing them.


Burton J. Smith, Technical Fellow for Microsoft Corporation, works with various groups within the company to help define and expand efforts in the areas of parallel and high performance computing. He reports directly to Craig Mundie, Chief Research and Strategy Officer for the company.

Smith is recognized as an international leader in high performance computer architecture and programming languages for parallel computers. Before joining Microsoft in December 2005, he co-founded Cray Inc., formerly Tera Computer Company. From its inception in 1988, Smith served as its chief scientist and a member of the board of directors, and as its chairman until 1999. Before that, Smith spent six years with Denelcor, Inc. as Vice President of Research & Development and three years as a Fellow of the Institute for Defense Analyses Supercomputing Research Center. From 1970 to 1979 he taught at the Massachusetts Institute of Technology and the University of Colorado.

In 2003, Smith received the Seymour Cray Computer Engineering Award from the IEEE Computer Society and was elected to the National Academy of Engineering. He received the Eckert-Mauchly Award, given jointly by the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery, in 1991, and was elected a fellow of each organization in 1994. Smith attended the University of New Mexico, where he earned a BSEE degree, and the Massachusetts Institute of Technology, where he earned SM, EE, and Sc.D. degrees.

Faculty Contact: Dr. Lawrence Rauchwerger