2016-2017

Spring 2017 Abstracts

Graduate Orientation I: Overview of Department Resources & Contacts, Honor Code, and Student Organizations


MANDATORY FOR NEW GRAD STUDENTS (but not other CSCE 681 students)

4:10-6:00 p.m., Monday, August 29, 2016
Room 124, Bright Building

Abstract

This meeting will concentrate on the essentials that new students will need to settle in. It will include an introduction to departmental administration (staff who's who, payroll, mailboxes, phones), computing resources (computer use/accounts, printer quotas, lab access/tours), the academic advising staff and resources, the TAMU honor system, TAMU Libraries, the Graduate Teaching Academy, the Student Engineers' Council, and relevant student organizations (CSEGSA, AWICS, TACS (the TAMU ACM and IEEE student chapter), UPE, and TAGD).


Graduate Orientation II: Presentation, Poster Session, & PIZZA!


MANDATORY FOR NEW GRAD STUDENTS and counts toward the requirement for CSCE 681 students.

4:10-6:00 p.m., Wednesday, August 31, 2016
Room 124, Bright Building

Abstract

  • 4:10-5:10 p.m. - Presentation
  • 5:10-6:00 p.m. - Pizza & Current Student Poster Session - new students can meet current grads and learn about ongoing research projects.

 

Shape Algebras in Computer Graphics

Ergun Akleman
Professor, Department of Visualization
Texas A&M University

4:10pm Wednesday, January 18, 2017
Room 124 HRBB

Abstract

In this talk, I will present the concept of shape algebra for effective algorithm and system development. Shape algebras naturally emerge as a result of the topological structure of the initial shapes and operations used to create new shapes. I observe that limiting initial shapes and operations is critical to avoid inconsistencies. For instance, in 2-manifold modeling systems, programmers commonly include some exceptions and provide operations that can create non-manifolds. Such exceptions and operations, which solve immediate practical concerns, make it harder to extend software without professional help and/or laborious effort.

I recently realized that shape algebras emerge as a common theme in many of my publications without my explicitly referring to them. In fact, I initially designed shape algebras for implicit surfaces that could guarantee interactive modeling with control shapes. Later, Jianer Chen and I designed a minimal set of operations over orientable 2-manifolds that effectively describes a shape algebra for orientable 2-manifolds. Because of the robustness and simplicity of the algebra, our students at Texas A&M University, with minimal instruction, could make the system grow. They added many high-level operations created as composites of the minimal operations. When we shared it on the web, many people discovered TopMod, found ways to create unusual and interesting shapes, and shared their experiences by developing video tutorials. Unfortunately, the power of TopMod was limited by its underlying shape algebra, which can only support orientable 2-manifolds.

Adding a single operator is sufficient to extend the algebra to non-orientable surfaces, which is reminiscent of the introduction of complex numbers by allowing irrational power operations, such as the square root, into the algebra. Immersions of non-orientable meshes in 3-space resulted in woven objects that can be considered 2-fold fabrics on polygonal meshes. To go further and extend the algebra to obtain 3-manifold meshes, it turned out that we needed to add only one new operation and its inverse to the existing set of minimal operations. This is analogous to the hierarchical structure among real algebras, complex algebras, and quaternion algebras. In other words, this underlying model provides strong representational power while using the existing infrastructure of 2-manifold mesh modeling, without a significant increase in the computational expense of representing a variety of topologically distinct shapes.

Biography

Ergun Akleman is a Professor in the Departments of Visualization and Computer Science. He has been at Texas A&M University since 1995. He received his Ph.D. degree in Electrical and Computer Engineering from the Georgia Institute of Technology in 1992. He is also a professional cartoonist, illustrator and caricaturist who has published more than 500 cartoons, illustrations and caricatures. His research work is interdisciplinary, usually motivated by aesthetic concerns. He has published extensively in the areas of shape modeling, image synthesis, artistic depiction, image-based lighting, texture and tiles, computer aided caricature, electrical engineering and computer aided architecture.

Faculty Contact: Dr. Lawrence Rauchwerger


My Smartphone Knows What You Print: Exploring Smartphone-based Side-channel Attacks Against 3D Printers

Kui Ren
Professor
University at Buffalo

4:10pm Monday, January 23, 2017
Room 124 HRBB

Abstract

Additive manufacturing, also known as 3D printing, has been increasingly applied to fabricate highly intellectual-property (IP) sensitive products. However, the related IP protection issues in 3D printers are still largely underexplored. On the other hand, smartphones are equipped with rich onboard sensors and have been applied to pervasive mobile surveillance in many applications. These facts raise one critical question: is it possible that smartphones access the side-channel signals of a 3D printer and then hack the IP information? In this talk, we answer this by performing an end-to-end study on exploring smartphone-based side-channel attacks against 3D printers. Specifically, we formulate the problem of the IP side-channel attack in 3D printing. Then, we investigate the possible acoustic and magnetic side-channel attacks using the smartphone built-in sensors. Moreover, we explore a magnetic-enhanced side-channel attack model to accurately deduce the vital directional operations of the 3D printer. Experimental results show that by exploiting the side-channel signals collected by smartphones, we can successfully reconstruct the physical prints and their G-code with a Mean Tendency Error of 5.87% on regular designs and 9.67% on complex designs, respectively. Our study demonstrates a new and practical smartphone-based side-channel attack that compromises IP information during 3D printing.
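
To give a hedged sense of the general flavor of such an attack (this is my own sketch, not the speakers' implementation), the snippet below extracts simple spectral features from windowed smartphone sensor traces and trains a classifier to infer the printer head's movement direction. The traces here are synthetic stand-ins; real attacks would use recorded acoustic/magnetometer data.

# Illustrative sketch only: classify 3D-printer nozzle movement direction from
# windowed sensor samples, using simple spectral features and an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def window_features(samples, win=256):
    """Split a 1-D sensor trace into windows and compute simple features."""
    n = len(samples) // win
    feats = []
    for i in range(n):
        w = samples[i * win:(i + 1) * win]
        spectrum = np.abs(np.fft.rfft(w))
        feats.append([w.mean(), w.std(), spectrum.argmax(), spectrum.max()])
    return np.array(feats)

# Synthetic stand-in data: two "directions" with different dominant frequencies.
rng = np.random.default_rng(0)
t = np.arange(256 * 200) / 1000.0
trace_x = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(t.size)
trace_y = np.sin(2 * np.pi * 70 * t) + 0.3 * rng.standard_normal(t.size)

X = np.vstack([window_features(trace_x), window_features(trace_y)])
y = np.array([0] * 200 + [1] * 200)   # 0 = X-axis move, 1 = Y-axis move
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("direction-classification accuracy:", clf.score(Xte, yte))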

Biography

Kui Ren is a professor of Computer Science and Engineering and the director of the UbiSeC Lab at the State University of New York at Buffalo (UB). He received his Ph.D. from Worcester Polytechnic Institute. Ren's current research interests span Cloud & Outsourcing Security, Wireless & Wearable Systems Security, and Mobile Sensing & Crowdsourcing. His research has been supported by NSF, DoE, AFRL, MSR, and Amazon. He received the UB Exceptional Scholar Award for Sustained Achievement in 2016, the UB SEAS Senior Researcher of the Year Award in 2015, the Sigma Xi/IIT Research Excellence Award in 2012, and the NSF CAREER Award in 2011. Ren has published extensively in top venues and has received several Best Paper Awards, including at IEEE ICNP 2011. He currently serves as an associate editor for IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Mobile Computing, IEEE Wireless Communications, and IEEE Internet of Things Journal. Ren is a Fellow of the IEEE, a Distinguished Lecturer of the IEEE, a member of the ACM, and a past board member of the Internet Privacy Task Force, State of Illinois.

Faculty Contact: Dr. Guofei Gu


Human-Robot Interaction and Whole-Body Robot Sensing

Vladimir Lumelsky
Professor Emeritus
University of Wisconsin-Madison

4:10pm Wednesday, February 1, 2017
Room 124 HRBB

Abstract

The ability of a robot to operate in an uncertain environment, such as near humans or far away under human control, potentially opens a myriad of uses. Examples include robots preparing the Mars surface for human arrival; robots for assembly of large space telescopes; robot helpers for the elderly; and robotic search and disposal of war mines. So far, advances in this area have been coming slowly, with a focus on specific tasks rather than the universal ability typical in nature. Challenges appear both on the robotics side and on the human side: robots have a hard time adjusting to an unstructured environment, whereas human cognition has serious limits in adjusting to robots and grasping complex 2D and 3D motion tasks. As a result, applications where robots operate near humans – or far away under their control – are exceedingly rare. The way out of this impasse is to supply the robot with whole-body sensing - the ability to sense surrounding objects over its whole body - and algorithms capable of utilizing these data in real time. This calls for large-area flexible sensing arrays - a sensitive skin covering the whole robot body, akin to the skin covering the human body. Whole-body sensing brings interesting, even unexpected, properties: powerful robots become inherently safe; human operators can move them fast, with “natural” speeds; robot motion strategies exceed human spatial reasoning skills; and it becomes realistic to utilize the natural synergy of human-robot teams and allow a mix of supervised and unsupervised robot operation. We will review the cognitive science, mathematical, algorithmic, and hardware (materials, electronics, computing) issues involved in realizing such systems.

Biography

Vladimir Lumelsky is Professor Emeritus at the University of Wisconsin-Madison. His Ph.D. in Applied Mathematics is from the Institute of Control Sciences, Russian National Academy of Sciences, Moscow. He has held engineering, research, administrative, and faculty positions with Ford Motor Research Labs, General Electric Research Center, Yale University, the University of Wisconsin-Madison, the University of Maryland, NASA Goddard Space Flight Center, and the National Science Foundation. Concurrently he held visiting positions with the Tokyo Institute of Technology, Japan; the Weizmann Institute, Israel; and the USA-Antarctica South Pole Station.

He has served as IEEE Sensors Council President; Founding Editor-in-Chief of the IEEE Sensors Journal; chair and co-chair of major conferences; on the Editorial Boards of IEEE Transactions on Robotics and Automation and other journals; on various governing committees of IEEE; and as guest editor for special journal issues. He has authored over 200 publications (books, journal papers, conference papers, reports); he is an IEEE Life Fellow and a member of ACM and SME.

Faculty Contact: Dr. Nancy M. Amato


Experiments in eScience – repeatability and reproducibility

Claudia Medeiros
UNICAMP

4:10pm Monday, February 6, 2017
Room 124 HRBB

Abstract

eScience can be defined as joint research in Computer Science and other domains that lets scientists from these domains conduct their research faster, better, or in a different way, while at the same time advancing the state of the art in Computer Science. As such, it involves the cooperation of computer scientists with researchers from, e.g., (A)rchaeology to (Z)oology, covering all knowledge domains.

While offering many exciting research opportunities for computer scientists, this poses a great many challenges, including those inherent to the very nature of interdisciplinary research – such as distinct work methodologies, very different vocabularies, and heterogeneity at all levels (from work profiles to infrastructures, from data to devices, software, and people-ware).

This talk will discuss one of these challenges – that of enabling experiment reproducibility in such heterogeneous research environments – and the many open problems whose solutions would benefit from innovative computer science research. It will draw on real-life examples from large applications in which I have worked.

Biography

Claudia Bauzer Medeiros is a full professor of Computer Science at the Institute of Computing (http://www.ic.unicamp.br/en), University of Campinas (Unicamp), Brazil (http://www.unicamp.br). She has received Brazilian and international awards for excellence in research, in teaching, and for her work in fostering the participation of women in computing, including the Anita Borg Change Agent Award and an award from Google Brazil. She is a Commander of the Brazilian Order of Scientific Merit and a former Distinguished Speaker of the ACM. She was awarded a Doctor Honoris Causa by the Universidad Antenor Orrego, Peru, and by the University Paris-Dauphine, France.

Her research is centered on the management and analysis of scientific data, to face the challenges posed by large, real-world applications. This involves handling distributed and very heterogeneous data sources, at varying scales in space and time, ranging from satellite data to earthbound sensor networks. For the past 25 years, she has coordinated large multi-institutional, multidisciplinary projects in biodiversity, climate change, and agricultural and environmental planning, involving universities in Brazil, Germany, and France. In 1994, she created the Laboratory of Information Systems at Unicamp (http://www.lis.ic.unicamp.br), one of the first research laboratories in Brazil dedicated to solving interdisciplinary problems involving scientific data. From 2003 to 2007 she was the President of the Brazilian Computer Society. Since 1998, she has served as a member of permanent scientific evaluation panels in Brazil, both at the national level (CAPES and CNPq) and in the state of São Paulo (FAPESP – http://www.fapesp.br/en/), where she coordinates the eScience program (http://www.fapesp.br/en/escience).

Faculty Contact: Dr. Dilma Da Silva


Psychology and Intelligent User Interaction

Metin Sezgin
Koc University

4:10pm Wednesday, February 15, 2017
Room 124 HRBB

Abstract

The recent advances in computer vision and machine learning, coupled with cheaper hardware and abundant computational power, have led to a surge in user interfaces that support new modes of interaction such as gestures and speech. Research in these new technologies was originally motivated by removing our dependency on traditional mouse- and keyboard-based interaction. However, this effort resulted in merely substituting humans for the hardware, without significant changes in the interaction paradigms. In other words, rather than throwing away the mouse and the keyboard altogether, we simply “turned people into mice.” Now, there are renewed attempts to build natural and easy-to-use interfaces by combining machine learning and computer vision technologies with a deeper understanding of human psychology, usability, and human-computer interaction. These efforts collectively define the field of intelligent user interfaces. In this talk, I will present case studies on intelligent user interfaces with a specific emphasis on how psychology can be a guide in building smart systems.

Biography

T. Metin Sezgin graduated summa cum laude with Honors from Syracuse University in 1999. He completed his MS in the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology in 2001 and received his PhD from MIT in 2006. He subsequently moved to the University of Cambridge, joining the Rainbow group at the Computer Laboratory as a Postdoctoral Research Associate. Dr. Sezgin is currently an Associate Professor in the College of Engineering at Koç University, Istanbul. His research interests include intelligent human-computer interfaces, multimodal sensor fusion, and HCI applications of machine learning. Dr. Sezgin is particularly interested in applications of these technologies in building intelligent pen-based interfaces. Dr. Sezgin’s research has been supported by international and national grants, including grants from the European Research Council and Turk Telekom. He is a recipient of the Career Award of the Scientific and Technological Research Council of Turkey.

Faculty Contact: Dr. Tracy Hammond


Towards unified control of prosthetic legs with human-inspired phasing and task adaptations

Robert Gregg
University of Texas at Dallas

4:10pm Wednesday, February 22, 2017
Room 124 HRBB

Abstract

The human gait cycle is typically viewed as a periodic sequence of discrete events, starting with heel contact during initial stance and ending with knee extension during late swing. This convention has informed the design of control strategies for powered prosthetic legs, which almost universally switch between multiple distinct controllers through the gait cycle based on a finite state machine. Human locomotion is further discretized into a small set of task-specific finite state machines, e.g., one for uphill and one for downhill. However, this discrete methodology cannot synchronize to the continuous motions of the user or adapt to the continuum of user activities. Instead of discretely representing human locomotion, this talk will present a continuous parameterization of human gait across measurable phase and task variables. Two studies with 10 able-bodied human subjects identify 1) a phase variable that robustly parameterizes knee and ankle patterns across perturbations to the gait cycle, and 2) task variables to parameterize kinematic adaptations to ground slope. A unifying prosthetic leg controller is then designed around this continuous parameterization of human gait to synchronize prosthetic joint patterns with the timing and activity of the human user. The viability of this approach is demonstrated by experiments with above-knee amputee subjects walking on a powered knee-ankle prosthesis at variable speeds and inclines. Applications in functional electrical stimulation and powered orthoses for stroke gait will also be discussed.
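
As a minimal illustration of what a phase variable can look like, the sketch below assumes a common atan2-style construction from a joint angle and its velocity, so that the phase advances monotonically through each cycle; the specific variable identified in the studies above may differ, and the trajectory here is synthetic.

# Illustrative sketch (assumptions, not the speaker's exact formulation): a
# continuous phase variable built from an angle and its velocity via atan2.
import numpy as np

def phase_variable(theta, theta_dot, theta0=0.0, scale=1.0):
    """Map (angle, velocity) to a phase in [0, 1) via the phase-plane angle."""
    phi = np.arctan2(-scale * theta_dot, theta - theta0)  # phase-plane angle
    return (phi % (2 * np.pi)) / (2 * np.pi)

# Example: a synthetic periodic "thigh angle" trajectory over two gait cycles.
t = np.linspace(0, 2, 400)
theta = 0.3 * np.sin(2 * np.pi * t)                 # radians
theta_dot = 0.3 * 2 * np.pi * np.cos(2 * np.pi * t)
s = phase_variable(theta, theta_dot, scale=1.0 / (2 * np.pi))
print(s[:5])  # phase values near the start of the cycle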

Biography

Robert D. Gregg IV received the B.S. degree in electrical engineering and computer sciences from the University of California, Berkeley in 2006 and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Illinois at Urbana-Champaign in 2007 and 2010, respectively. He joined the Departments of Bioengineering and Mechanical Engineering at the University of Texas at Dallas (UTD) as an Assistant Professor in June 2013 with an adjunct appointment at the UT Southwestern Medical Center. Prior to joining UTD, he was a Research Scientist at the Rehabilitation Institute of Chicago and a Postdoctoral Fellow at Northwestern University. His research concerns the control mechanisms of bipedal locomotion with application to wearable control systems, including prostheses and orthoses. Dr. Gregg is a recipient of the NIH Director’s New Innovator Award and the Career Award at the Scientific Interface from the Burroughs Wellcome Fund. His work has been recognized with the Best Student Paper Award of the 2008 American Control Conference and the 2015 IEEE Conference on Decision & Control, the Best Technical Paper Award of the 2011 CLAWAR Conference, and the 2009 O. Hugo Schuck Award from the IFAC American Automatic Control Council. Dr. Gregg is a Senior Member of the IEEE Control Systems Society and the IEEE Robotics & Automation Society.

Faculty Contact: Dr. Roozbeh Jafari


The Bitcoin Backbone Protocol: Analysis and Applications

Juan Garay
Sr. Principal Research Scientist
Yahoo Research

4:10pm Monday, February 27, 2017
Room 124 HRBB

Abstract

As the first decentralized cryptocurrency, Bitcoin has ignited much excitement, not only for its novel realization of a central bank-free financial instrument, but also as an alternative approach to classical problems in cryptographic protocols and fault-tolerant distributed computing, such as reaching and maintaining agreement distributedly in the presence of misbehaving participants. Formally capturing such reach was the intent of my Eurocrypt '15 paper [GKL15], where we extract and analyze the core of the Bitcoin protocol, which we call the Bitcoin "backbone." 

In this talk I will present some fundamental properties of its underlying blockchain data structure (e.g., "common prefix," "chain quality," etc.), which parties ("miners") maintain and try to extend by generating "proofs of work." I will then show how applications such as consensus and a robust public transaction ledger (i.e., Bitcoin) can be built "on top" of these properties, assuming that the hashing power of an adversary controlling a fraction of the parties is strictly less than 1/2. Finally, I will mention some more recent results, including techniques for dispensing with an unpredictable "genesis" block, building one from "scratch," and the first formal analysis of Bitcoin's difficulty (re)calculation function as the miners' population evolves over time.
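
For readers unfamiliar with the "proofs of work" that the backbone abstraction builds on, here is a minimal, illustrative sketch with toy difficulty (this is not Bitcoin's actual block format): a new block extends the chain when its hash falls below a difficulty target.

# Minimal proof-of-work sketch: search for a nonce whose block hash has a
# given number of leading zero bits. Difficulty here is a toy value.
import hashlib

def mine(prev_hash: str, payload: str, difficulty_bits: int = 16):
    """Return a (nonce, hash) pair whose hash is below the difficulty target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(f"{prev_hash}|{payload}|{nonce}".encode()).hexdigest()
        if int(h, 16) < target:
            return nonce, h
        nonce += 1

genesis = "0" * 64
nonce, block_hash = mine(genesis, "tx-data")
print(nonce, block_hash)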

Biography

Juan Garay is currently a Sr. Principal Research Scientist at Yahoo Research. Previously, after receiving his PhD in Computer Science from Penn State, he was a postdoc at the Weizmann Institute of Science, and held research positions at the IBM T.J. Watson Research Center, Bell Labs, and AT&T Labs--Research. His research interests include both foundational and applied aspects of cryptography and information security. He has published extensively in the areas of cryptography, network security, distributed computing, and algorithms; has been involved in the design, analysis and implementation of a variety of secure systems; and is the recipient of over two dozen patents. He has served on the program committees of numerous conferences and international panels---including co-chairing Crypto 2013 and 2014, the discipline's premier conference.

Faculty Contact: Dr. Guofei Gu


Real-Time Reachability for Safety of Autonomous Systems

Taylor Johnson
Assistant Professor
Vanderbilt University

4:10pm Monday, March 6, 2017
Room 124 HRBB

Abstract

The Simplex Architecture ensures the safe use of an unverifiable, complex controller such as those arising in autonomous systems by executing it in conjunction with a formally verified safety controller and a formally verified supervisory controller. Simplex enables the safe use of high-performance, untrusted, and complex control algorithms without requiring complex controllers to be formally verified or certified. The supervisory controller should take over control from an unverified complex controller if it misbehaves and transfer control to a safety controller. The supervisory controller should (1) guarantee the system never enters an unsafe state (safety), but should also (2) use the complex controller as much as possible (minimize conservatism). The problem of precisely and correctly defining the supervisory controller has previously been considered either using a control-theoretic optimization approach (LMIs), or through an offline hybrid systems reachability computation. In this work, we show that a combined online/offline approach that uses aspects of the two earlier methods in conjunction with a real-time reachability computation also maintains safety, but with significantly less conservatism, allowing the complex controller to be used more frequently. We demonstrate the advantages of this unified approach on a saturated inverted pendulum, where the verifiable region of attraction is over twice as large compared to the earlier approach. We present results of embedded hardware studies using both ARM processors on Beaglebone Black and Atmel AVR (Arduino) microcontrollers. This is the first ever demonstration of a hybrid systems reachability computation in real time on actual embedded platforms, and required addressing significant technical challenges. We will conclude with ongoing research on formally modeling and verifying CPS, including swarm robotics controlled with distributed algorithms, automotive CPS, aerospace CPS including groups of UAVs, and developing fundamental new modeling abstractions for designing CPS using extensions of Signal Temporal Logic (STL) done in conjunction with Toyota.
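
The following is a schematic sketch of the Simplex-style decision logic described above, under my own simplifying assumptions: the talk's supervisory controller uses a real-time reachability computation, whereas the check here is a crude forward simulation on a toy double integrator, and the controllers and safety set are hypothetical.

# Schematic Simplex-style supervision: use the complex controller whenever a
# short-horizon rollout stays inside the safe set, else switch to the verified
# safety controller. The "reachability" check is a crude forward simulation.
def simulate(state, controller, step, horizon, dt):
    """Forward-simulate `horizon` steps and return the visited states."""
    states = [state]
    for _ in range(horizon):
        state = step(state, controller(state), dt)
        states.append(state)
    return states

def supervisory_control(state, complex_ctrl, safety_ctrl, step, is_safe,
                        horizon=50, dt=0.01):
    """Pick the complex controller only if its short-horizon rollout stays safe."""
    rollout = simulate(state, complex_ctrl, step, horizon, dt)
    if all(is_safe(s) for s in rollout):
        return complex_ctrl(state)
    return safety_ctrl(state)

# Toy double-integrator example: keep position within |x| <= 1.
def step(s, u, dt):
    x, v = s
    return (x + v * dt, v + u * dt)

is_safe = lambda s: abs(s[0]) <= 1.0
complex_ctrl = lambda s: 5.0                          # aggressive, unverified policy
safety_ctrl = lambda s: -2.0 * s[0] - 3.0 * s[1]      # simple stabilizing law

print(supervisory_control((0.9, 0.5), complex_ctrl, safety_ctrl, step, is_safe))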

Biography

Taylor T. Johnson is an Assistant Professor of Electrical Engineering and Computer Science (EECS) at Vanderbilt University (since August 2016), where he directs the Verification and Validation for Intelligent and Trustworthy Autonomy Laboratory (VeriVITAL) and is a Senior Research Scientist in the Institute for Software Integrated Systems. Taylor was previously an Assistant Professor of Computer Science and Engineering (CSE) at the University of Texas at Arlington (September 2013 to August 2016). Taylor is a 2016 recipient of the AFOSR Young Investigator Research Program (YIP) award. Taylor earned a PhD in Electrical and Computer Engineering (ECE) at the University of Illinois at Urbana-Champaign in 2013, an MSc in ECE at Illinois in 2010, and a BSEE in ECE from Rice University in 2008. Taylor's research focus is developing formal verification techniques and software tools for cyber-physical systems (CPS) with goals of improving safety, reliability, and security. Taylor has published over two-dozen papers on these verification and validation methods and their applications across domain areas such as power and energy systems, aerospace, transportation systems, and robotics, two of which were recognized with best paper awards, from the IEEE and IFIP, respectively. Taylor gratefully acknowledges the support of his group's research by AFRL, AFOSR, ARO, NSF (CISE CCF/SHF, CNS/CPS; ENG ECCS/EPCN), NVIDIA, ONR, Toyota, and USDOT.

Faculty Contact: Dr. Dylan Shell


A Smart Design Framework for a Novel Reconfigurable Multi-processor Systems-on-Chip (ASREM) Architecture

Anandi Dutta
Lecturer
Texas A&M University

4:10pm Wednesday, March 8, 2017
Room 124 HRBB

Abstract

The design trend in hand-held devices involves heterogeneous computing with MPSoCs (Multi-Processor Systems-on-Chip). Modern commercial MPSoCs usually integrate multiple CPUs dedicated to OS-related and control-based work, GPUs (Graphics Processing Units) dedicated to high-performance graphics processing, and DSPs (Digital Signal Processors) dedicated to signal processing. Assuring high performance on a resource-constrained, battery-operated platform is one of the main challenges in commercial MPSoC design. This research focuses on developing a smart framework for Reconfigurable Multi-Processor Systems-on-Chip (ASREM) design to achieve a more personalized hand-held device. The objective of this work is to develop a smart framework that embodies the philosophy that the system should build a better system by itself. Moreover, the design aims to address MPSoC design challenges such as low power consumption and high performance requirements in a resource-constrained environment.

In this work, a smart Reconfigurable Multi-Processor Systems-on-Chip (ASREM) has been introduced. The researcher examined a reconfigurable MPSoC architecture that incorporates one Processor-FPGA core to ensure flexibility and better design parameters. The system learns usage statistics through an Android application with the help of the support vector machine (SVM) algorithm. Accomplishing this, the system forms a decision function from the SVM classifier to improve usability. According to the user's preferences, it then re-designs the reconfigurable MPSoC to ensure a customized and superior user experience. A reconfigurable MPSoC architecture and a task scheduling mechanism have been introduced to enable the design. As a case study and proof of concept, an image processing task has been performed on an FPGA-SoC platform and a GPU to ensure current standards. The approach to designing systems for hand-held devices adopted in this study enables customization of the device after manufacturing.
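
As a hedged sketch of the usage-learning step, the snippet below assumes hypothetical per-session usage features (these are not the framework's actual inputs): an SVM trained on usage statistics suggests which accelerator profile to load into the FPGA fabric.

# Illustrative sketch: an SVM maps per-session usage statistics to a suggested
# FPGA accelerator profile. Feature names and profiles are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [camera_minutes, gaming_minutes, browsing_minutes, video_minutes]
usage = np.array([
    [30,  2,  5,  3],   # heavy camera user
    [25,  1,  8,  2],
    [ 2, 40,  3, 10],   # heavy gaming user
    [ 1, 35,  6, 12],
    [ 3,  2, 30, 25],   # media/browsing user
    [ 2,  4, 28, 30],
])
profile = np.array([0, 0, 1, 1, 2, 2])  # 0=image-processing, 1=graphics, 2=video-decode

clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(usage, profile)
new_session = np.array([[28, 3, 4, 2]])
print("suggested FPGA profile:", clf.predict(new_session)[0])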

Biography

Anandi Dutta is a Lecturer at Texas A&M University. She received her PhD in Computer Engineering from the University of Louisiana at Lafayette and her MS in Electrical Engineering from Louisiana State University. Her research interests include reconfigurable MPSoC (Multi-Processor Systems-on-Chip) design, FinFET SRAM design, and machine learning algorithms.

Faculty Contact: Dr. Nancy M. Amato


Current Trends in High-Performance Computing and Future Challenges

Eminent Scholar Lecture Series

Jack Dongarra
Distinguished Professor of Computer Science
University of Tennessee
Distinguished Research Staff
Computer Science and Mathematics Division
Oak Ridge National Laboratory
2014–15 Faculty Fellow

*Special time and location*
7:00PM, Tuesday, March 21, 2017

Bethancourt Ballroom, Memorial Student Center

Abstract

In this talk we examine how high-performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.

Biography

Jack Dongarra holds appointments at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester. He specializes in numerical algorithms in linear algebra, parallel computing, use of advanced computer architectures, programming methodology, and tools for parallel computers. He was awarded the IEEE Sid Fernbach Award in 2004; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he was the recipient of the IEEE Charles Babbage Award; and in 2013 he received the ACM/IEEE Ken Kennedy Award. He is a Fellow of the AAAS, ACM, IEEE, and SIAM, a foreign member of the Russian Academy of Sciences, and a member of the US National Academy of Engineering.

Contact: Dr. Nancy M. Amato


Cracking the Code on Style 

Thamar Solorio
Associate Professor
University of Houston

4:10pm Wednesday, March 29, 2017
Room 124 HRBB

Abstract

Style in written language refers to the way authors choose to structure text. It includes verbosity, which can be measured in terms of sentence length and/or paragraph length. Style also includes elements of syntax, for example accounting for the use of passive versus active voice. Other relevant elements of style are word choice and rhythm. During this talk I will present recent results on extracting style to predict how much readers will like books. This approach provides an alternative to traditional book recommendation systems that look at user profiles and input from ‘similar readers’. I will show that a neural network architecture is a reasonable approach to solving this problem, albeit not the most competitive. The results from deep learning are nonetheless interesting, in particular because they seem to be able to represent literary genre. The last part of my talk will provide an overview of related work on stylistic modeling of text.
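
The small, hypothetical sketch below illustrates stylistic features of the kind mentioned above (verbosity, word choice, passive voice); the actual study uses richer features and, as noted, neural architectures.

# Illustrative sketch: compute a few crude stylistic features of a text.
import re
import numpy as np

def style_features(text):
    """Return [avg sentence length, avg word length, passive-ish rate, type-token ratio]."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    passive_hits = len(re.findall(r"\b(?:was|were|been|being|is|are)\s+\w+ed\b", text.lower()))
    return np.array([
        len(words) / max(len(sentences), 1),                 # verbosity
        np.mean([len(w) for w in words]) if words else 0.0,  # word length
        passive_hits / max(len(sentences), 1),               # crude passive-voice rate
        len(set(words)) / max(len(words), 1),                # lexical variety
    ])

sample = "The ship was tossed by the storm. She read quietly. It was admired by all."
print(np.round(style_features(sample), 3))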

Biography

Thamar Solorio is an Associate Professor in the Department of Computer Science at the University of Houston (UH). She is the founder and director of the Research in Text Understanding and Analysis of Language (RiTUAL) group at UH. Her main research interests include stylistic modeling of text, syntactic analysis of mixed language data, language assessment, and information extraction from health support groups. She has M.S. and PhD degrees in Computer Science from INAOE, Puebla, Mexico. The Department of Defense and the National Science Foundation currently fund her research program. She is the recipient of a CAREER award for her work in authorship analysis, and the 2014 Denice Denton Emerging Leaders ABIE Award. She serves as editorial board member for the Journal on Artificial Intelligence Research (JAIR) and the Computer Speech and Language Journal.

Faculty Contact: Dr. Ruihong Huang


3D Shadows: Casting light on the fourth dimension

Henry Segerman
Assistant Professor
Oklahoma State University

4:10pm Wednesday, April 5, 2017
Room 124 HRBB

Abstract

Our brains have evolved in a three-dimensional environment, and so we are very good at visualising two- and three-dimensional objects. But what about four-dimensional objects? The best we can really do is to look at three-dimensional "shadows". Just as a shadow of a three-dimensional object squishes it into the two-dimensional plane, we can squish a four-dimensional shape into three-dimensional space, where we can then make a sculpture of it. If the four-dimensional object isn't too complicated and we choose a good way to squish it, then we can get a very good sense of what it is like. We will explore the sphere in four-dimensional space, the four-dimensional polytopes (which are the four-dimensional versions of the three-dimensional polyhedra), and various 3D printed sculptures, puzzles, and virtual reality experiences that have come from thinking about these things. I talk about these topics and much more in my new book, "Visualizing Mathematics with 3D Printing".
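
As a small concrete illustration of the "shadow" idea (my own sketch, not taken from the talk or the book), the following projects the 16 vertices of the four-dimensional hypercube onto the unit 3-sphere and then stereographically into ordinary 3-space, where they could be modeled or 3D printed.

# Sketch: radially place tesseract vertices on the unit 3-sphere, then
# stereographically project them from R^4 into R^3.
import itertools
import numpy as np

def stereographic(p):
    """Project a point on the unit 3-sphere (in R^4) to R^3 from the pole (0,0,0,1)."""
    x, y, z, w = p
    return np.array([x, y, z]) / (1.0 - w)

# The 16 vertices of the hypercube (tesseract), pushed onto the unit sphere.
vertices4d = [np.array(v) / 2.0 for v in itertools.product([-1, 1], repeat=4)]
shadow3d = [stereographic(v) for v in vertices4d]
for v4, v3 in zip(vertices4d[:4], shadow3d[:4]):
    print(v4, "->", np.round(v3, 3))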

Biography

Henry Segerman is currently an assistant professor in the Department of Mathematics at Oklahoma State University. He received his Ph.D. in Mathematics in 2007. His research is in three-manifolds and triangulations, hyperbolic geometry, mathematical visualization, 3D printing, and virtual and augmented reality.

Faculty Contact: Dr. Ergun Akleman


Functional Safety for Software-Defined Autonomous Platforms

Gus Espinosa
Senior Principal Engineer, Lead Architect for Functional Safety
Intel

4:10pm Monday, April 10, 2017
Room 124 HRBB

Abstract

The application of autonomous computing and software-defined infrastructure to the automotive, industrial, and embedded computing segments is enabling consolidation of computing in these domains onto high-performance platforms. This is driving significant new growth opportunities in computing but also new challenges.

Functional safety, which is concerned with the avoidance of unreasonable risk of injury to humans due to electrical/electronic system failures, is a key requirement in a number of emerging areas that feature human/machine interaction. The integration of high-performance computing with high safety in these platforms poses unique challenges for the development of the hardware and software components in these systems. This talk will focus on what functional safety is, its relationship to autonomous platforms, and the implications for product and technology development for these platforms.

Biography

Gus Espinosa is Senior Principal Engineer and Lead Architect for Functional Safety with the Integrated IP & Technology Group at Intel. He has held a variety of technical and leadership roles in architecture and design. He has worked on many of Intel's major microprocessor designs since the 486 generation and served as Chief Architect of several Intel® Pentium® III and Intel® Pentium® 4 processors. In more recent roles, he led the team responsible for the architecture development of the Knights family processors, which were in the #1 system in the Top 500 list of supercomputers for 2013-2016, and he was responsible for developing the strategy for enabling Intel Architecture IP integration by external SOC companies using Intel's Custom Foundry service. In his current role, he is developing the strategy and architecture for support of functional safety - critical for automotive and industrial applications - across Intel's IP portfolio. His areas of expertise are in computer architecture, processor microarchitecture, and performance analysis. Gus holds a Bachelor of Science degree in Electrical Engineering from Cornell University and a Master of Science degree in Computer Engineering from Boston University. He lives in Portland, Oregon with his wife and two daughters.

Faculty Contact: Dr. Aakash Tyagi


Big Data Visual Analytics

Klaus Mueller
Professor
Stony Brook University

4:10pm Wednesday, April 12, 2017
Room 124 HRBB

Abstract

The growth of digital data is tremendous. Any aspect of life and matter is being recorded and stored on cheap disks, either in the cloud, in businesses, or in research labs. We can now afford to explore very complex relationships with many variables playing a part. But for this we need powerful tools that allow us to be creative, to sculpt this intricate insight formulated as models from the raw block of data. High-quality visual feedback plays a decisive role here. In this talk I will discuss various platforms we have developed over the years to make the exploration of large multivariate data more intuitive and direct. These platforms were conceived in tight collaborations with domain experts in the fields of climate science, health informatics, and computer systems.

Biography

Klaus Mueller received a Ph.D. in computer science from the Ohio State University. He is currently a professor in the Computer Science Department at Stony Brook University and is also a senior adjunct scientist in the Computational Science Initiative at Brookhaven National Labs. His current research interests are visualization, visual analytics, data science, medical imaging, and high-performance computing. He won the US National Science Foundation CAREER award in 2001 and the SUNY Chancellor Award in 2011. Mueller has authored more than 170 peer-reviewed journal and conference papers, which have been cited more than 7,000 times. He is a frequent speaker at international conferences, has participated in numerous tutorials on various topics, and was until recently the chair of the IEEE Technical Committee on Visualization and Computer Graphics. He is also on the editorial board of IEEE Transactions on Visualization and Computer Graphics, and he is a senior member of the IEEE.

Faculty Contact: Yoonsuck Choe


How to Keep your Secrets in a Post-Quantum World

Maxson Lecture
Kristin Lauter
Principal Researcher and Research Manager
Microsoft Research

4:00pm Wednesday, April 19, 2017
Room 149 Blocker

Abstract

This talk will give an overview of the history of various hard problems in number theory which are used as the basis for cryptosystems.  I will survey the evolution of attacks and discuss the upcoming NIST competition to standardize new cryptographic schemes for a post-quantum world.  I will present some current proposals for post-quantum systems based on supersingular isogeny graphs of elliptic curves and lattice-based cryptosystems in cyclotomic number fields and give the ideas behind some recent attacks.

Biography

Kristin Lauter is a Principal Researcher and Research Manager for the Cryptography group at Microsoft Research. She directs the group’s research activities in theoretical and applied cryptography and in the related math fields of number theory and algebraic geometry. Her personal research interests include algorithmic number theory, elliptic curve, pairing-based, and lattice-based cryptography, homomorphic encryption, and cloud security and privacy, including privacy for healthcare.

Lauter is currently serving as President of the Association for Women in Mathematics, and on the Council of the American Mathematical Society.   She was selected to be a Fellow of the American Mathematical Society in 2014. She is on the Editorial Board for the SIAM Journal on Applied Algebra and Geometry (SIAGA), Journal of Mathematical Cryptology, and International Journal of Information and Coding Theory. She was a co-founder of the Women In Numbers Network, a research collaboration community for women in number theory, and she serves on the Scientific Advisory Board for BIRS, the Banff International Research Station.   Lauter is also an Affiliate Professor in the Department of Mathematics at the University of Washington. She received her BA, MS, and Ph.D., all in mathematics, from the University of Chicago, in 1990, 1991, and 1996, respectively. She was T.H. Hildebrandt Assistant Professor of Mathematics at the University of Michigan (1996-1999), and a Visiting Scholar at Max Planck Institut fur Mathematik in Bonn, Germany (1997), and at Institut de Mathematiques Luminy in France (1999). In 2008, Lauter, together with her coauthors, was awarded the Selfridge Prize in Computational Number Theory.


Toward a Theory of Automated Design of Minimal Robots

Jason O'Kane
Associate Professor
University of South Carolina

4:10pm Monday, April 24, 2017
Room 124 HRBB

Abstract

The design of an effective autonomous robot relies upon a complex web of interactions and tradeoffs between various hardware and software components. The problem of designing such a robot becomes even more challenging when the objective is to find robot designs that are minimal, in the sense of utilizing only limited sensing, actuation, or computational resources. The usual approach to navigating these tradeoffs is currently by careful analysis and human cleverness. In contrast, this talk will present some recent research that seeks to automate some parts of this process, by representing models for a robot's interaction with the world as formal, algorithmically-manipulable objects, and posing various kinds of questions on those data structures. The results include both bad news (i.e., hardness results) and good news (practical algorithms).

Biography

Jason O'Kane is Associate Professor in Computer Science and Engineering and Director of the Center for Computational Robotics at the University of South Carolina. He holds the Ph.D. (2007) and M.S. (2005) degrees from the University of Illinois at Urbana-Champaign and the B.S. (2001) degree from Taylor University, all in Computer Science. He has won a CAREER Award from NSF, a Breakthrough Star Award from the University of South Carolina, and the Outstanding Graduate in Computer Science Award from Taylor University. He was a member of the DARPA Computer Science Study Group.  His research spans algorithmic robotics, planning under uncertainty, and computational geometry.

Faculty Contact: Dylan Shell


Fall 2016 Abstracts

Polyhedra, Polynomials, and Proximity of Numerical Solutions

J. Maurice Rojas
Professor of Math and Computer Science and Engineering
Texas A&M University

4:10pm Wednesday, September 7, 2016
Room 124 HRBB

Abstract

We explore some simple and useful ways one can use computational geometry to help approximate solutions of systems of equations. (Real solutions to polynomial systems come up in geometric modelling, and complex solutions come up in numerous applications of differential equations.) Whereas classical algorithms for numerical solving take exponential time, we describe, in the simplest possible terms, new techniques that allow one to attain polynomial time in certain settings.

While our techniques rely on variants of tropical geometry, we describe everything from scratch. Some of the results presented are joint with Martin Avendano, Alperen Ergur, Mounir Nisse, and Grigoris Paouris.

Biography

J. Maurice Rojas is a mathematician working at the intersection of complexity theory, algebraic geometry, and number theory. He is currently a full professor in the mathematics department and (by courtesy appointment) the computer science and engineering department at Texas A&M. He obtained his applied mathematics Ph.D. from UC Berkeley in 1995 (under the guidance of Fields Medalist Steve Smale) and his computer science M.S. in 1991 (under the guidance of John Canny). He has held visiting positions at TU Munich, ENS Lyon, Johns Hopkins University, MSRI, IMA, Sandia National Laboratories, MIT, and CCR. Rojas was von Neumann visiting professor at TU Munich last year, a winner of the 2013 ISSAC distinguished paper award for his work on sparse polynomials over finite fields (joint with J. Bi and Q. Cheng) and, earlier in his career, he was an NSF CAREER Fellow and an NSF Postdoc. He has also run a successful NSF-sponsored REU on algorithmic algebraic geometry over the past 12 years.

Faculty Contact: Dr. Nancy M. Amato


A Novel Algorithm for Estimating Camera Pose and Target Structure Using Quaternions

Nick Gans
Assistant Professor
University of Texas at Dallas

4:10pm Monday, September 12, 2016
Room 124 HRBB

Abstract

Humans intuitively use visual information to estimate their motion and the structure of their environment. Computer algorithms to perform similar estimation with camera images have existed for decades and find application in computer vision, robot control, vehicle guidance, and more. Despite the long history of these algorithms, there are still open problems and weaknesses.

We present a novel algorithm to estimate the rotation and translation between two camera views from matched feature points. Our approach is immune to a variety of problems that plague existing methods. Methods based on the Euclidean Homography matrix only function when all points are coplanar in 3D space. Methods based on the Essential matrix become degenerate as the translation between two camera views goes to zero. By formulating the problem using quaternions, the rotation and translation are recovered independently, and the algorithm eschews the shortcomings of the existing methods. We do not impose any constraints on the 3D configuration of the points (such as coplanar or non-coplanar constraints). Investigations using both simulations and experiments have validated the new method and verified that the algorithm can be used in practical contexts. Noise and timing comparisons between the proposed algorithm and existing algorithms establish the merits of this new algorithm. Source code for this algorithm is available to the public and can be accessed online.
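
The sketch below is not the speaker's two-view algorithm; as a hedged illustration of why quaternions are attractive for pose recovery, it uses the classical Horn method for absolute orientation between matched 3D point sets, where the rotation is obtained as an eigenvector of a 4x4 matrix built from the correspondences.

# Illustrative sketch (classical Horn quaternion-based absolute orientation,
# not the novel two-view method): recover R, t from matched 3D points.
import numpy as np

def quat_to_rot(q):
    """Rotation matrix for a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def horn_pose(P, Q):
    """Find R, t minimizing ||Q - (R P + t)|| for matched 3D points (rows)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    S = (P - cp).T @ (Q - cq)                      # 3x3 cross-covariance
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    q = eigvecs[:, eigvals.argmax()]               # optimal unit quaternion
    R = quat_to_rot(q)
    return R, cq - R @ cp

# Quick self-check with a random rigid motion.
rng = np.random.default_rng(1)
P = rng.standard_normal((20, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = horn_pose(P, Q)
print(np.allclose(R_est, R_true, atol=1e-6), np.round(t_est, 3))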

Biography

Nicholas Gans is a professor in Electrical Engineering at The University of Texas at Dallas.  His research interests include nonlinear and adaptive control, machine vision, robotics and autonomous vehicles. Current research includes self-optimizing mobile systems to maximize sensor information and distributed control of multi-robot systems.  Dr. Gans has published over 90 peer-reviewed conference and journal papers, and he holds three patents.

Dr. Gans earned his B.S. in electrical engineering from Case Western Reserve University in 1999, his M.S. in electrical and computer engineering in 2002, and his Ph.D. in systems and entrepreneurial engineering from the University of Illinois Urbana-Champaign in 2005. He was a postdoctoral researcher at the University of Florida and a postdoctoral associate with the National Research Council, where he conducted research on control of autonomous aircraft for the Air Force Research Laboratory and developed the Visualization Laboratory for simulation of vision-based control systems. He is a Senior Member of the IEEE and of the IEEE Robotics and Automation Society and Control Systems Society.

Faculty Contact: Dr. Roozbeh Jafari


Semantic Design Software for Computational Fabrication

Shinjiro Sueda
Assistant Professor
Texas A&M University 

4:10pm Monday, September 19, 2016
Room 124 HRBB

Abstract

Even though common users are now able to 3D print customized physical objects from scratch, most users do not have the expertise to create interesting designs, instead relying on models available on the web. We need new software tools that allow non-specialists to take advantage of the advances in 3D printing hardware. I will describe some examples of algorithmic design tools that make this possible. With these tools, any user can create, for example: geared linkage mechanisms that produce a prescribed motion; a shape that can be folded into a box; and a connector for two physical objects given their geometries. I will conclude by discussing the underlying principles behind these tools and avenues for future work.

Biography

Shinjiro Sueda is an assistant professor of computer science at Texas A&M University. Prior to this appointment, he was an assistant professor at California Polytechnic State University after completing a post-doctoral fellowship at Disney Research Boston and MIT. He received his Ph.D. from the University of British Columbia. His main research area is computer graphics and animation, specializing in physically based animation, biomechanical simulations, and computational fabrication.

Faculty Contact: Dr. John Keyser


Human-Machine Collaborative Optimization via Apprenticeship Scheduling

Matthew Gombolay
Ph.D. Candidate
Massachusetts Institute of Technology

4:10pm Monday, September 26, 2016
Room 124 HRBB

Abstract

Health care, manufacturing, and military operations require the careful choreography of resources — people, robots, and machinery — to effectively fulfill the responsibilities of the profession. Poor resource utilization has been shown to have drastic health, safety, and economic consequences. However, coordinating a heterogeneous team of agents to complete a set of tasks related through upper- and lower-bound temporal constraints is NP-Hard.

The process of manually codifying domain expert knowledge and crafting effective computer algorithms leaves much to be desired. Yet, there is hope. In practice, we know there is a rare breed of human experts who effectively reason about complex resource optimization problems every day. The question then becomes: how can we autonomously learn the rules-of-thumb and heuristics from these domain experts to support real-time decision support and autonomously coordinate human-robot teams?

In this talk, I will present a novel computational technique, known as Apprenticeship Scheduling, which 1) learns the heuristics and implicit rules-of-thumb developed by domain experts from years of experience, 2) embeds and leverages this knowledge within a scalable resource optimization framework, and 3) provides decision support in a way that engages users and benefits them in their decision-making process. By intelligently leveraging the ability of humans to learn heuristics and the speed of modern computation, we can improve the ability to coordinate resources in these time- and safety-critical domains.
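
The sketch below shows one way the "learning from expert demonstrations" step can be framed (this is my assumption, not necessarily the speaker's exact formulation): each expert decision yields pairwise examples of the task the expert scheduled next versus the alternatives, a classifier is trained on feature differences, and the learned model then acts as a priority rule. The feature names and the toy "expert" are hypothetical.

# Illustrative pairwise-ranking sketch: learn a scheduling priority rule from
# expert demonstrations and apply it to pick the next task.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Task features (hypothetical): [deadline_slack, duration, resource_load]
def make_pairs(demonstrations):
    X, y = [], []
    for tasks, chosen in demonstrations:
        for i, task in enumerate(tasks):
            if i == chosen:
                continue
            X.append(tasks[chosen] - task); y.append(1)   # expert preferred `chosen`
            X.append(task - tasks[chosen]); y.append(0)   # symmetric negative pair
    return np.array(X), np.array(y)

# Toy demonstrations: the "expert" always picks the task with least slack.
rng = np.random.default_rng(2)
demos = []
for _ in range(200):
    tasks = rng.uniform(0, 10, size=(5, 3))
    demos.append((tasks, int(tasks[:, 0].argmin())))

X, y = make_pairs(demos)
ranker = LogisticRegression().fit(X, y)

def pick_next(tasks):
    """Score each candidate against the others and pick the top-ranked task."""
    scores = [sum(ranker.predict_proba([t - other])[0, 1] for other in tasks)
              for t in tasks]
    return int(np.argmax(scores))

new_tasks = rng.uniform(0, 10, size=(5, 3))
print("scheduler picks task", pick_next(new_tasks),
      "; least slack is task", int(new_tasks[:, 0].argmin()))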

Biography

Matthew Gombolay is a PhD candidate and NSF Graduate Research Fellow in the Interactive Robotics Group at the Massachusetts Institute of Technology (MIT). He received his S.M. (2013) from the department of Aeronautics and Astronautics at MIT and his B.S (2011) from the department of Mechanical Engineering at Johns Hopkins University. Matthew studies the interaction of humans and automation and is developing computational methods for real-time and collaborative resource optimization. Matthew focuses on harnessing the strengths of human domain experts and sophisticated computational techniques to form collaborative human-machine teams for manufacturing, healthcare, and military operations. Matthew has worked for MIT Lincoln Laboratory and the Johns Hopkins University Applied Physics Laboratory developing cutting-edge planning and scheduling algorithms for ballistic and anti-ship missile defense with the US Navy and Missile Defense Agency. Matthew has received a Best Technical Paper Award from the AIAA Intelligent Systems Committee, and his work has been highlighted in media outlets such as PBS, NBC, Harvard Business Review, and public radio.

Faculty Contact: Dr. Dylan Shell


Advanced Analytics Impact on the Way We Do Business in the Oil & Gas Industry

Diana Ramberansingh
Manager, Enterprise IT Services
ConocoPhillips

Scott Brice
Integration Analyst
ConocoPhillips

4:10pm Monday, October 3, 2016
Room 124 HRBB

Abstract

ConocoPhillips is putting big data analytics power closer to the front-line decision makers at the $56 billion oil and gas company. Business units and functions across ConocoPhillips are exploring the use of advanced analytics and forms of artificial intelligence such as machine learning and neural networks, and IT Manager Diana Ramberansingh will discuss how the IT organization is building a platform to enable them. Advanced analytics could bring dramatically reduced drilling costs, higher predictability of production volumes before drilling, more efficient and effective well completions, and better decisions through the asset lifecycle. The company also is applying analytics to prevent well downtime, predict equipment failures, and optimize production processes. But not all of the data needed to solve certain problems exists today, so IT teams are looking for ways to create the critical data and get it back to where it can be used for analysis. Ramberansingh will explore this and other challenges the company is overcoming to transform a global business through data-driven decisions.

Biographies

Ramberansingh is manager of Enterprise IT Services at ConocoPhillips. She has responsibility for all corporate IT business partner organizations as well as the global Applications Development service, and she participates as an active member of the IT Leadership Team.

Ramberansingh graduated from Texas A&M University in 1989 with a Bachelor of Science in Chemical Engineering. She is a member of API (American Petroleum Institute), AITP (Association of Information Technology Professionals), AWC (Association for Women in Computing), and WIT (Women in Technology).

Scott is an integration analyst for the Information Integration Center of Excellence at ConocoPhillips. In his current role, Scott led an effort to enhance a custom .NET web application used to maintain information about integration solutions within ConocoPhillips. He has worked with various groups to deliver integration solutions using technologies such as Teradata, Hadoop, and Spotfire.

Scott graduated from Texas A&M University in 2013 with a Bachelor of Science in Computer Engineering and minors in Business and Mathematics. 

Faculty Contact: Dr. Lawrence Rauchwerger


Making in the Classroom: The Rationale, The Challenge, and The Imperative

Francis Quek
Professor
Texas A&M University

4:10pm Wednesday, October 5, 2016
Room 124 HRBB

Abstract

Computing is increasingly focused on interaction with the physical world rather than just the abstract virtual world of screens and pixels. Physical computing combines the design of physical electronics with computation to bring about possibilities that simply interacting with pixels behind glass cannot. One manifestation of physical computing in our culture is seen in the Maker movement. Technologies such as 3D printing, open-source electronics, and accessible computing have combined to give rise to a Maker movement that promises to broaden participation in technology-based innovation and production. The potential of Making to enhance learning, especially in the areas of Science, Technology, Engineering and Mathematics (STEM), has led to calls to bring Making into education. However, the characteristics of innovation, discovery, and student-directed learning for which Making is prized are not easily incorporated into public school learning. Making-based learning is thus often provided in clubs, community Makerspaces, and workshops. This poses a severe issue of equity, as youth participants are implicitly self-selected through parents who have the knowledge and means to enroll their children at such venues. Taking a human-centered perspective, we present a project where Making is integrated with the formal curriculum of a public elementary school that serves predominantly underrepresented populations. We will examine the rationale for employing Making-based classroom learning and review our strategy for curriculum alignment. We will see how our 'double scaffolding' approach supports both the learning of STEM curricula and the knowledge and skills associated with computing and Making. Besides learning STEM material, our approach seeks to support the development of STEM self-efficacy and self-identities in children who may not otherwise see these possibilities in themselves. We present results of our year-long study that show the promise of our approach.

Biography

Francis Quek is a Professor of the Department of Visualization (and by courtesy, Professor of Computer Science and Engineering and Professor of Psychology) at Texas A&M University. He joined Texas A&M University as an interdisciplinary President’s Signature Hire to bridge disparities in STEM. Formerly he has been the Director of the Center for Human-Computer Interaction at Virginia Tech.  Francis received both his B.S.E. summa cum laude (1984) and M.S.E. (1984) in electrical engineering from the University of Michigan.  He completed his Ph.D. in Computer Science at the same university in 1990. Francis is a member of the IEEE and ACM.

He performs research in Making for STEM learning, embodied interaction, embodied learning and sensemaking, multimodal verbal/non-verbal interaction, multimodal meeting analysis, interfaces to support learning, vision-based interaction, multimedia databases, medical imaging, assistive technology for the blind, human-computer interaction, computer vision, and computer graphics. He leads several multidisciplinary research efforts to understand the communicative realities of multimodal interaction.


Understanding the Geometry of Stable Masonry: Reconciling the Elastic and Equilibrium Schools of Analysis

Etienne Vouga
Assistant Professor
University of Texas at Austin

4:10pm Monday, October 10, 2016
Room 124 HRBB

Abstract

The Aqueduct of Segovia was built by the Romans 2000 years ago and remains one of the most prominent Roman monuments in Spain. Yet in the mid-1990s, engineers studied the stability of the aqueduct using finite element analysis and could not conclusively demonstrate that the aqueduct should even be standing.

I will discuss the two schools of analysis that are commonly used to study masonry structures: linear elastic analysis using finite elements, and the geometry-based equilibrium methods that have received renewed attention in computer graphics in recent years. I will explore the common claim that equilibrium methods outperform finite elements for masonry stability: Is this even true? Why does FEM give the wrong answer for the aqueduct? What is the cause of the discrepancy? I will show that the two schools can be reconciled.

Biography

Etienne Vouga is an Assistant Professor of Computer Science at the University of Texas at Austin. He received his PhD at Columbia University in 2013 under Eitan Grinspun and spent a year as an NSF Mathematical Sciences Postdoctoral Fellow at Harvard, working with L. Mahadevan.

His research interests include physical simulation, geometry processing, and discrete differential geometry, with applications to computer graphics and computational mechanics. Special effects studios Disney and Weta Digital have used his work on cloth and hair simulation in movies such as Tangled and The Hobbit.

Faculty Contact: Dr. Scott Schaefer


Formally Verified Libraries as a “metric system” to Help Ensure Reliable and Secure Computer Systems

Flemming Andersen
Intel Corporation

4:10pm Wednesday, October 12, 2016
Room 124 HRBB

Abstract

Over the past twenty years the hardware industry has gained a great deal of experience with the practical application of formal methods and formal verification. From originally being able to formally verify only RTL components with hundreds of gates, the tools today support millions of gates. In addition, industry has found that functional-equivalence verification techniques must be combined with simulation- and emulation-based validation to help ensure reliable and secure computer systems.

The most successful application areas have naturally been those such as arithmetic, decoders/encoders, protocols, RAS/ECC, and other areas where solid standards and requirements exist. The IEEE 754 floating-point standard is a good example: it rigorously defines what the formal specifications and the computer implementations should be, and as a result incompatible functionality has been mostly avoided since the costly Intel floating-point divider bug of 1994.

Areas where no standards exist remain a challenge for formal verification, since the investment in specifications is often more or less wasted when the next product is developed.

Hence, we propose a “metric system” based on using formally verified standard libraries as a reference for the validation of IP components at all levels, to help ensure correct, reliable, and secure system behavior. In this presentation we will look at several successful formal verification areas and discuss how this experience can serve as a foundation for a future metric system for the certification of computer systems.
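
As a deliberately tiny, hypothetical illustration of the idea (not Intel's tooling or methodology), the Python sketch below treats a trusted reference specification of addition as the “metric” and exhaustively checks a candidate bit-level adder against it at a small bit-width; real flows use formal equivalence checkers and theorem provers rather than exhaustive simulation.

    # Toy sketch: validate a candidate implementation against a trusted
    # reference ("golden") specification. Hypothetical example code, not a
    # description of any Intel flow.
    def reference_add(a: int, b: int, width: int) -> int:
        """Reference specification: modular addition at the given bit-width."""
        return (a + b) % (1 << width)

    def ripple_carry_add(a: int, b: int, width: int) -> int:
        """Candidate implementation: bit-level ripple-carry adder."""
        result, carry = 0, 0
        for i in range(width):
            x, y = (a >> i) & 1, (b >> i) & 1
            result |= (x ^ y ^ carry) << i
            carry = (x & y) | (x & carry) | (y & carry)
        return result

    WIDTH = 6
    for a in range(1 << WIDTH):
        for b in range(1 << WIDTH):
            assert ripple_carry_add(a, b, WIDTH) == reference_add(a, b, WIDTH)
    print("candidate matches the reference for all", (1 << WIDTH) ** 2, "input pairs")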

Biography

Flemming Andersen recently retired from Intel, where he was a principal engineer and formal verification manager. He joined the company in 2000, where he hired, developed, and managed the formal verification team at the Austin, Texas site. In 2005 he was offered the opportunity to work at the Intel Research Labs in the Trusted Platform Lab (TPL) in Oregon, hiring, managing, and developing a team investigating formal methods for security. When TPL was closed at the end of 2006 as part of a company-wide cost cutting, Flemming moved to the Server Development Group, which implements the Many Integrated Core (SDG/MIC) server processors known as the Xeon Phi processors used in Intel-based supercomputers. One of these is the Milky Way supercomputer, which for several years ranked as the fastest supercomputer in the world.

In SDG/MIC, Flemming owned and managed the formal verification (FV) of the RTL in the Xeon Phi processors. The main focus was on arithmetic verification, since Intel never wants to encounter a new FDIV bug like the one that cost the company almost $500 million in 1994. The FV team also verified the correct functionality of RAS/ECC and other critical areas that benefit from formal methods, such as cache coherence protocols. During his last several years at Intel he worked on developing formal methods to bridge the gap between formal verification and simulation-based validation techniques.

Before coming to the USA, Flemming managed a team of 19 researchers at TeleDanmark R&D, where he was responsible for developing new Internet services. His team implemented the Yellow Pages in Denmark and, as early as 1996, a VoIP solution resembling an early version of Skype. He originally started as a research scientist at TeleDanmark Research (TDR), where he participated in the implementation of compilers and developed formal verification tools for concurrency. During his employment at TDR, he took a two-year leave of absence to work as a guest scientist at the IBM Science Center in Heidelberg, Germany, where he did research in databases and helped IBM develop a new hierarchical database query language that contributed to the definition of SQL2.

Flemming has a Ph.D. in computer science as well as an M.Sc.EE from the Technical University of Denmark. His Ph.D. work on formal verification of concurrent systems using the UNITY theory led to an invitation to work with Professor Mani Chandy at Caltech and later to collaboration with Professor Jayadev Misra at the University of Texas at Austin. Flemming holds six granted US patents and has 31 publications, of which more than 15 have been presented at conferences. Before coming to the USA, Flemming served on review committees, conference panels, and program committees, and was an EU reviewer; he is currently a member of the IEEE and ACM.

Faculty Contact: Dr. Aakash Tyagi


With Extreme Scale Computing the Rules Have Changed

Jack Dongarra
University Distinguished Professor, TIAS Fellow
University of Tennessee

4:10pm Monday, October 17, 2016
Room 124 HRBB

Abstract

In this talk we will look at the current state of high performance computing and at the next stage, extreme scale computing. With extreme scale computing there will be fundamental changes in the character of floating point arithmetic and data movement. We will look at how extreme scale computing has caused algorithm and software developers to change their way of thinking about how to implement and program certain applications.

Biography

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee and holds the title of Distinguished Research Staff in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow at Manchester University, and an Adjunct Professor in the Computer Science Department at Rice University. He is the director of the Innovative Computing Laboratory at the University of Tennessee. He is also the director of the Center for Information Technology Research at the University of Tennessee, which coordinates and facilitates IT research efforts at the University.

He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he was the recipient of the IEEE IPDPS Charles Babbage Award; and in 2013 he was the recipient of the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high performance computing. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a member of the National Academy of Engineering.

Faculty Contact: Dr. Lawrence Rauchwerger


Data Reduction in The Era of Big Data: Challenges and Opportunities

Hong Jiang
Wendell H. Nedderman Endowed Professor and Chair of the Computer Science & Engineering Department
University of Texas at Arlington

4:10pm Monday, October 31, 2016
Room 124

Abstract

We are living in a rapidly changing digital world in which we are inundated by an ocean of data generated at a rate of 2.5 billion GB every single day! It is projected that by 2017 humans will have generated 16 trillion GB of digital data. This phenomenal growth and ubiquity of data has ushered in an era of “Big Data”, which brings with it new challenges as well as opportunities. In this talk, I will first discuss the big data challenges and opportunities facing computer and storage systems research, with an emphasis on the challenges and research questions brought on by the almost unfathomable volumes of data, namely, research questions dealing with data reduction in the face of big data. I will then present some recent solutions proposed by my research group that seek to address the performance and space issues facing big data and data-intensive applications by means of data reduction.
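
One concrete flavor of data reduction, offered here only as a generic sketch and not as a description of the group's specific techniques, is chunk-level deduplication: chunks with identical content are detected by a cryptographic hash and stored only once.

    # Minimal deduplication sketch: identical chunks are stored once and the
    # original data is rebuilt from a recipe of chunk hashes. Production
    # systems add content-defined chunking, indexing, and compression.
    import hashlib

    CHUNK_SIZE = 8  # tiny chunks so the example is easy to inspect

    def dedup_store(data: bytes):
        store = {}    # hash -> unique chunk bytes
        recipe = []   # sequence of hashes needed to rebuild the data
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        return store, recipe

    def rebuild(store, recipe) -> bytes:
        return b"".join(store[h] for h in recipe)

    data = b"ABCDEFGH" * 1000 + b"12345678"   # highly redundant input
    store, recipe = dedup_store(data)
    assert rebuild(store, recipe) == data
    print(len(data), "logical bytes reduced to",
          sum(len(c) for c in store.values()), "stored bytes")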

Biography

Hong Jiang received the B.Sc. degree in Computer Engineering in 1982 from Huazhong University of Science and Technology, Wuhan, China; the M.A.Sc. degree in Computer Engineering in 1987 from the University of Toronto, Toronto, Canada; and the Ph.D. degree in Computer Science in 1991 from Texas A&M University, College Station, Texas, USA. He is currently Chair and Wendell H. Nedderman Endowed Professor of the Computer Science and Engineering Department at the University of Texas at Arlington. Prior to joining UTA, he served as a Program Director at the National Science Foundation (January 2013 - August 2015), and before that he was at the University of Nebraska-Lincoln from 1991, where he was Willa Cather Professor of Computer Science and Engineering. His present research interests include computer architecture, computer storage systems and parallel I/O, high-performance computing, big data computing, cloud computing, and performance evaluation. He has graduated 16 Ph.D. students who now work in either major IT companies or academia. He has over 200 publications in major journals and international conferences in these areas. Dr. Jiang is a Fellow of the IEEE and a member of the ACM.

Faculty Contact: Dr. Nancy M. Amato


What does the operating system ever do for me? - Systems Challenges in Graph Analytics

Tim Harris
Architect
Oracle Labs

4:10pm Monday, November 7, 2016
Room 124 HRBB

Abstract

Graphs are at the core of many data processing problems, whether that is searching through billions of records for suspicious interactions, ranking the importance of web pages based on their connectivity, or identifying possible “missing” friends on a social network. Using these workloads as examples, I will describe challenges in building efficient parallel implementations, and the ways in which the operating system is no longer providing effective abstractions of the underlying computer hardware. I will show how obtaining good performance and scalability requires careful control over the placement of computation and storage within a system, and an understanding of the structure of the data being processed. I will then talk about how I see the role of the operating system evolving in distributed “rack scale” systems.
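
To make the workload concrete, the following is a minimal, purely illustrative power-iteration PageRank over an adjacency list (my sketch, not Oracle Labs code); the scattered, data-dependent accesses in the inner loop are exactly the pattern that makes the placement of computation and data so important at scale.

    # Tiny power-iteration PageRank over an adjacency-list graph.
    def pagerank(out_edges, damping=0.85, iterations=50):
        n = len(out_edges)
        rank = [1.0 / n] * n
        for _ in range(iterations):
            new_rank = [(1.0 - damping) / n] * n
            for src, dests in enumerate(out_edges):
                if not dests:                      # dangling node: spread evenly
                    share = damping * rank[src] / n
                    new_rank = [r + share for r in new_rank]
                else:
                    share = damping * rank[src] / len(dests)
                    for dst in dests:              # irregular, data-dependent updates
                        new_rank[dst] += share
            rank = new_rank
        return rank

    graph = [[1, 2], [2], [0], [2]]                # edges 0->1, 0->2, 1->2, 2->0, 3->2
    for node, score in enumerate(pagerank(graph)):
        print(f"node {node}: {score:.3f}")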

Biography

Tim Harris leads the Oracle Labs group in Cambridge, UK.  His research interests span multiple layers of the stack, including parallel programming, VMM / OS / runtime-system interaction, and opportunities for specialized architecture support for particular workloads. He has also worked on the implementation of software transactional memory for multi-core computers, and the design of programming language features based on it.  He is a co-author of the Morgan Claypool book Transactional Memory.

Tim has a BA and PhD in computer science from Cambridge University Computer Laboratory.  He was on the faculty at the Computer Laboratory from 2000-2004 where he led the department's research on concurrent data structures and contributed to the Xen virtual machine monitor project.  He was at Microsoft Research from 2004, and then joined Oracle Labs in 2012.

Faculty Contact: Dr. Lawrence Rauchwerger


VGAN: Generative Adversarial Networks as Variational Training of Energy Based Models

Yu Cheng
Research Staff Member
Thomas J. Watson Research Center, IBM Research

4:10pm Wednesday, November 9, 2016
Room 124 HRBB

Abstract

In this work, we identify several causes of the difficulty of training GANs: the missing entropy term of the generator distribution, the formulation of the energy function, and the optimization procedure. We accordingly propose a solution, VGAN, a novel deep generative model. We show that VGAN corresponds to minimizing a variational lower bound of the negative log likelihood of an energy-based model (EBM), where the model distribution p(x; θ) is approximated by another distribution q(x; φ) that is easy to sample from. The training of VGAN follows a two-step procedure: given p(x; θ), q(x; φ) is optimized to approximate p(x; θ), which tightens the lower bound; p(x; θ) is then updated one step using samples drawn from q(x; φ). Our experiments demonstrate that the VGAN framework is more robust with respect to the choice of parameters and architecture and can generate reasonably good samples on several datasets.
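
The two-step procedure can be illustrated with a deliberately simplified numerical toy (my sketch, not the authors' implementation): take a one-parameter Gaussian energy E(x; θ) = (x - θ)^2 / 2 and a unit-variance Gaussian sampler q(x; φ); step one fits q to the current model (here exactly, by setting φ = θ), and step two takes one contrastive gradient step on θ using the data and samples drawn from q.

    # Toy alternating scheme in the spirit of variational EBM training.
    #   step 1: fit the sampler q(x; phi) to the current model p(x; theta)
    #   step 2: update theta with E_data[dE/dtheta] - E_q[dE/dtheta]
    # With E(x; theta) = (x - theta)^2 / 2, dE/dtheta = -(x - theta).
    import random

    random.seed(0)
    data = [random.gauss(3.0, 1.0) for _ in range(2000)]   # "true" mean is 3.0

    theta, lr = 0.0, 0.1
    for step in range(200):
        phi = theta                                         # step 1: tighten the bound
        samples = [random.gauss(phi, 1.0) for _ in range(200)]
        grad_data = sum(-(x - theta) for x in data) / len(data)
        grad_model = sum(-(x - theta) for x in samples) / len(samples)
        theta -= lr * (grad_data - grad_model)              # step 2: one model update

    print(f"estimated mean of the data distribution: {theta:.2f}")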

Biography

Yu Cheng is a Research Staff Member at the IBM T.J. Watson Research Center. He obtained his Ph.D. from the Computer Science Department at Northwestern University in 2015 and his bachelor's degree from Tsinghua University in 2010. His research centers on deep learning in general, with specific interests in unsupervised learning and compressing deep networks, and their applications in computer vision, data mining, and health informatics. His work has been published at many top conferences, including ICML, NIPS, CVPR, ICCV, and KDD.

Faculty Contact: Dr. Xia (Ben) Hu


Public Safety Informatics

Yasser Morgan
Associate Professor
University of Regina

4:10pm Monday, November 14, 2016
Room 124 HRBB

Abstract

Public safety informatics encompasses a growing number of technologies focused on exploring novel approaches to enhancing citizens' safety through the use of information theory, computational science, and data communications. It is characterized by the need for collaborative efforts to investigate the impact of the explored technologies on the human, cultural, and socio-economic values of both small and wide communities. While public safety informatics is a fairly new term, the evolution of the technologies that build up its complex communications infrastructures, digital forensics, and security approaches goes back decades.

In this 50-minute presentation, Morgan, the founder of BRiC Laboratories in Canada, presents his experience in developing safety technologies, from the early days of document security to the latest trends in using deep learning.

Biography

Morgan is an established researcher with diverse interests and over 40 articles published in top international journals and conference proceedings. Morgan is an active member of the IEEE. He received his Ph.D. from Carleton University and is currently teaching at the University of Regina. His research interests span intelligent data networks, vehicular communications, digital security, deep learning, object detection, and intrusion detection. During the late 1990s Morgan built a document scanning and document security system with a small start-up called AIT. The system was adopted to protect the Canadian passport and went on to become the de facto standard for state travel documents. Morgan is also known for his contributions to the IEEE 802.11p standards group and the IEEE P1609.0/.1/.2/.3/.4/.5 WAVE groups. He has authored and co-authored many research papers in addition to contributions to standards bodies such as 3GPP, IETF, IEEE, and IEEE-SA. Morgan's continuous contribution to these standards and to the development of modern vehicular safety methods is aligned with his interest in secure intelligent communications. Recently, Morgan founded BRiC Laboratories to investigate, research, and build novel solutions for modern public safety challenges.

Faculty Contact: Dr. Robin Murphy


Sampling-Based Motion Planning: From Intelligent CAD to Crowd Simulation to Protein Folding

Nancy M. Amato
Regents Professor and Unocal Professor
Texas A&M University

4:10pm Monday, November 21, 2016
Room 124 HRBB

Abstract

Motion planning has applications in many domains such as robotics, animation, virtual prototyping and training, and even protein folding and drug design. Surprisingly, sampling-based planning methods have proven effective on problems from all these domains. In this talk, we describe sampling-based planning and give an overview of some variants developed in our group. We describe in more detail our work related to virtual prototyping, crowd simulation, and protein folding. For virtual prototyping, we show that in some cases a hybrid system incorporating both an automatic planner and haptic user input leads to superior results. For crowd simulation, we describe techniques for evacuation planning and for evaluating architectural designs. Finally, we describe our application of sampling-based motion planners to the simulation of molecular motions, such as protein and RNA folding.
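
As a flavor of what sampling-based planning looks like, here is a minimal probabilistic-roadmap (PRM) sketch for a point robot among circular obstacles (a generic illustration, not the group's planners): sample random configurations, keep the collision-free ones, and connect nearby samples whose straight-line paths are also collision-free.

    # Minimal PRM sketch: point robot in the unit square with circular obstacles.
    import math
    import random

    random.seed(1)
    OBSTACLES = [((0.5, 0.5), 0.2), ((0.2, 0.8), 0.1)]   # (center, radius)

    def collision_free(p):
        return all(math.dist(p, c) > r for c, r in OBSTACLES)

    def edge_free(p, q, steps=20):
        return all(
            collision_free(((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1]))
            for t in (i / steps for i in range(steps + 1))
        )

    def build_prm(num_samples=200, radius=0.15):
        nodes = []
        while len(nodes) < num_samples:                  # sample free configurations
            p = (random.random(), random.random())
            if collision_free(p):
                nodes.append(p)
        edges = [(i, j)                                  # connect nearby free pairs
                 for i in range(len(nodes)) for j in range(i + 1, len(nodes))
                 if math.dist(nodes[i], nodes[j]) < radius
                 and edge_free(nodes[i], nodes[j])]
        return nodes, edges

    nodes, edges = build_prm()
    print(f"roadmap with {len(nodes)} nodes and {len(edges)} edges")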

Biography

Nancy M. Amato is Regents Professor and Unocal Professor of Computer Science and Engineering at Texas A&M University, where she co-directs the Parasol Lab. Her main areas of research focus are motion planning and robotics, computational biology and geometry, and parallel and distributed computing. Amato received undergraduate degrees in Mathematical Sciences and Economics from Stanford University, and M.S. and Ph.D. degrees in Computer Science from UC Berkeley and the University of Illinois, respectively. She was program chair for the 2015 IEEE International Conference on Robotics and Automation (ICRA) and for Robotics: Science and Systems (RSS) in 2016. She is an elected member of the CRA Board of Directors (2014-2017), is co-Chair of CRA-W (2014-2017), and was co-chair of the NCWIT Academic Alliance (2009-2011). She received the 2014 CRA Habermann Award, the inaugural NCWIT Harrold/Notkin Research and Graduate Mentoring Award in 2014, the 2013 IEEE HP/Harriet Rigas Award, and a Texas A&M AFS university-level teaching award in 2011. She received an NSF CAREER Award and is an AAAS Fellow, an ACM Fellow, and an IEEE Fellow.



Computer Science in Chemistry: from Matrices to Tensors and Beyond

Devin Matthews
Researcher, The Institute for Computational Engineering and Sciences (ICES)
University of Texas at Austin

4:10pm Monday, November 28, 2016
Room 124 HRBB

Abstract

Computation in quantum chemistry (QC) has become a cornerstone of modern chemical research. The operations encountered in quantum chemistry are overwhelmingly linear-algebra based, so it is no surprise that the Basic Linear Algebra Subprograms (BLAS) form the basic computational kernels in virtually all quantum chemistry packages. However, the computational aspects of QC are actually much richer than "normal" dense linear algebra, incorporating tensors (multi-dimensional arrays), complicated permutational symmetries, both structured and unstructured sparsity patterns, and non-linear systems of equations. In this talk, I present two efforts to bridge computer science and computational chemistry beyond the BLAS: the Tensor-Based Library Instantiation Software (TBLIS) and the Aquarius quantum chemistry package, which leverages the Cyclops Tensor Framework (CTF).
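
A small example of the kind of operation that goes beyond the BLAS (a generic numpy illustration, not the TBLIS or Aquarius API): a four-index tensor contraction of the sort that appears in coupled-cluster-style methods. Written directly as an einsum it is one line; mapping it onto the BLAS requires explicit permutations and reshapes into matrix-matrix multiplies.

    # Generic contraction Z[a,b,i,j] = sum_{c,d} V[a,b,c,d] * T[c,d,i,j].
    import numpy as np

    rng = np.random.default_rng(0)
    nv, no = 6, 4                                   # illustrative dimensions
    V = rng.standard_normal((nv, nv, nv, nv))
    T = rng.standard_normal((nv, nv, no, no))

    Z = np.einsum("abcd,cdij->abij", V, T)          # direct tensor contraction

    # Equivalent BLAS-style route: flatten to matrices and multiply.
    Z_blas = (V.reshape(nv * nv, nv * nv) @ T.reshape(nv * nv, no * no)).reshape(nv, nv, no, no)
    assert np.allclose(Z, Z_blas)
    print("contraction result shape:", Z.shape)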

Biography

Devin Matthews is the Arnold O. Beckman Postdoctoral Fellow in the Institute for Computational Engineering and Sciences at the University of Texas at Austin. His interests include high-accuracy quantum chemistry and tensor algorithms. He received his Ph.D. in Chemistry at UT Austin as a DOE Computational Science Graduate Fellow and was awarded the Howes Scholar award for his work on massively parallel quantum chemistry algorithms in the newly developed Aquarius program.

Faculty Contact: Dr. Tim Davis