
Fall 2017 Abstracts

Graduate Orientation II: Presentation, Poster Session, & PIZZA!


MANDATORY FOR NEW GRAD STUDENTS and counts toward the requirement for CSCE 681 students.

4:10-6:00 p.m., Wednesday August 30, 2017
Room 124, Bright Building

Abstract

  • 4:10-5:10 p.m. - Presentation
  • 5:10-6:00 p.m. - Pizza & Current Student Poster Session - new students can meet current grads and learn about ongoing research projects.

Graduate Orientation I: Overview of Department Resources & Contacts, Honor Code, and Student Organizations


MANDATORY FOR NEW GRAD STUDENTS (but not other CSCE 681 students)

4:10-6:00 p.m., Monday September 4, 2017
Room 124, Bright Building

Abstract

This meeting will concentrate on the essentials new students need to settle in. It will include an introduction to departmental administration (staff who's who, payroll, mailboxes, phones), computing resources (computer use/accounts, printer quotas, lab access/tours), the academic advising staff and resources, the TAMU honor system, TAMU Libraries, the Graduate Teaching Academy, the Student Engineers' Council, and relevant student organizations (CSEGSA, AWICS, TACS (the TAMU ACM and IEEE student chapter), UPE, and TAGD).


Systems for Clinical Outcomes Predictions

Bobak Mortazavi
Assistant Professor
Texas A&M University

4:10pm Wednesday, September 6, 2017
HRBB 124

Abstract

The design of personal medical embedded systems for user-centric health monitoring involves platform development for data collection, machine learning for processing vast quantities of heterogeneous data, and insight into the underlying clinical questions these systems aim to address. The interdisciplinary nature of this work requires first understanding the clinical issue at hand and then developing systems and algorithms tailored to it, along with the unique challenges posed by each individual application. This talk focuses on understanding clinical data, the challenges of implementing machine learning techniques, the differences between the methods used in clinical outcomes prediction and those available to computer scientists, and several open-ended case studies that offer opportunities for both algorithmic and embedded-systems improvements.

Biography

Jack Bobak Mortazavi is an Assistant Professor in Computer Science and Engineering at Texas A&M. After receiving his bachelor's degree from the University of California, Berkeley, he earned his Ph.D. in Computer Science from the University of California, Los Angeles, where he focused on the development of embedded systems for the Wireless Health Institute. Most recently, he was a postdoctoral associate, and then instructor, in the Department of Internal Medicine, Section of Cardiology, at the Yale School of Medicine. He has recently focused on clinical research challenges in predictive models and comparative effectiveness techniques, to better address the challenges of personalized health monitoring and to develop personalized remote systems for clinical outcomes.

Faculty Contact: Dr. Dilma Da Silva


Bio-behavioral signals and systems: From signal representations to novel health applications

Theodora Chaspari
Assistant Professor
Texas A&M University

4:10pm Wednesday, September 13, 2017
Room 124 HRBB

Abstract

Bio-behavioral signal processing and systems modeling enable an integrated computational approach to the study of human behavior and human physical and mental well-being through overt behavioral signals and covert biomarkers. Recent converging advances in sensing and computing, including wearable technologies, allow the unobtrusive long-term tracking of individuals, yielding rich multimodal signal measurements from real life. In this talk, we will present the development of data-scientific and context-rich approaches for analyzing, quantifying, and interpreting these bio-behavioral signals. The first part of the talk will describe a novel knowledge-driven signal representation framework able to efficiently handle the large volume of acquired data and the noisy signal measurements. Our approach involves the use of sparse approximation techniques and the design of signal-specific dictionaries learned through Bayesian methods, outperforming previously proposed models on signal reconstruction and information retrieval criteria. The second part will focus on translating the derived signal representations into novel, intuitive quantitative measures analyzed with probabilistic and statistical models in relation to external factors of observable behavior. This work has found applications in Autism intervention, for detecting beneficial regulation mechanisms during child-therapist interactions, as well as in the family studies domain, for identifying instances of emotional escalation and interpersonal conflict. The final part of the talk will discuss how the results of this analysis can be employed in designing human-assistive personalized bio-feedback systems able to promote healthy routines, increase emotional wellness and awareness, and revolutionize clinical assessment and intervention.
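
To make the sparse-representation idea above concrete, here is a minimal Python sketch using standard (non-Bayesian) dictionary learning from scikit-learn. The synthetic signal windows, dictionary size, and sparsity level are all invented for illustration; the talk's Bayesian-learned, signal-specific dictionaries go well beyond this.

    # Illustrative only: standard sparse dictionary learning on synthetic 1-D
    # "physiological" signal windows (slow drifts plus bump-like responses).
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 64)
    windows = np.array([
        a * np.exp(-((t - c) ** 2) / 0.01) + b * t + 0.01 * rng.standard_normal(64)
        for a, b, c in rng.uniform([0.5, -0.2, 0.2], [1.5, 0.2, 0.8], size=(200, 3))
    ])

    # Learn a small dictionary; each window is approximated as a sparse
    # combination of learned atoms (here at most 3 atoms per window).
    dl = DictionaryLearning(n_components=12, transform_algorithm="omp",
                            transform_n_nonzero_coefs=3, random_state=0)
    codes = dl.fit_transform(windows)       # sparse coefficients, shape (200, 12)
    recon = codes @ dl.components_          # reconstruction from the sparse codes

    err = np.linalg.norm(windows - recon) / np.linalg.norm(windows)
    print(f"relative reconstruction error: {err:.3f}")
    print(f"avg nonzero coefficients per window: {(codes != 0).sum(1).mean():.1f}")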

Biography

Theodora Chaspari is an Assistant Professor in the Computer Science & Engineering Department at Texas A&M University. She received her diploma (2010) in Electrical and Computer Engineering from the National Technical University of Athens, Greece, and her Master of Science (2012) and Ph.D. (2017) in Electrical Engineering from the University of Southern California. From 2010 she worked as a Research Assistant at the Signal Analysis and Interpretation Laboratory at USC, and she was a Lab Associate Intern at Disney Research (summer 2015). Dr. Chaspari's research interests lie in the areas of biomedical signal processing, human-computer interaction, behavioral signal processing, data science, and machine learning. She is a recipient of the USC Annenberg Graduate Fellowship, the USC Women in Science and Engineering Merit Fellowship, and an IEEE Signal Processing Society Travel Grant.

Faculty Contact: Dr. Dilma Da Silva


Interactive Modeling Techniques for Trees and Mechanisms

Zhigang Deng
Professor
University of Houston

4:10pm Wednesday, September 20, 2017
Room 124 HRBB

Abstract

In recent years, the question of how to effectively create high-quality 3D models of complex objects, including trees and mechanisms, has attracted a lot of attention. In this talk, I will present two recent works in this direction. The first is an effective modeling technique that generates a set of morphologically diverse and inspiring virtual trees through hierarchical topology-preserving blending, aiming to stimulate designers' creativity. On top of that, I will describe a morphing technique that generates high-quality visual effects between topologically varying trees while ensuring that any in-between shape remains a topologically consistent and botanically meaningful natural tree. The second work is a novel interactive system for mechanism modeling from multi-view images. Its key feature is that the generated 3D mechanism models contain not only geometric shapes but also internal motion structures: they can be directly animated through kinematic simulation.

Biography

Dr. Zhigang Deng is a (Full) Professor of Computer Science at the University of Houston (UH), where he is also the Director of Graduate Studies in the Computer Science Department and the Director of the UH Computer Graphics and Interactive Media (CGIM) Lab. He earned his Ph.D. in Computer Science at the University of Southern California in 2006, his B.S. in Mathematics from Xiamen University (China), and his M.S. in Computer Science from Peking University (China). Over the years, he has worked or consulted at the Founder Research and Development Center (China), AT&T Shannon Research Lab, and Qualcomm Research Center. His current research interests are in the broad, interdisciplinary areas of graphics/animation, human-computer interaction, virtual human modeling & animation, affective computing, and humanoid robots. He is the recipient of many awards, including the CASA Best Paper Award (2017), the ACM ICMI Ten Year Technical Impact Award (2014), the UH Teaching Excellence Award (2013), the NSFC Overseas and Hong Kong/Macau Young Scholars Collaborative Research Award (2013), the ICRA Best Medical Robotics Paper Award Runner-up (2012), and a Google Faculty Research Award (2010). Besides serving as general co-chair of CASA 2014 and SCA 2015, he is currently an Associate Editor of several journals, including Computer Graphics Forum and Computer Animation and Virtual Worlds. His research has been funded by NSF, NIH, NASA, DOD, Texas NHARP, and various industry sources (Google, Nokia, NVIDIA, etc.). More information can be found at his webpage, http://graphics.cs.uh.edu/zdeng

Faculty Contact: Dr. Shinjiro Sueda


Exploiting Low-Quality Visual Data using Deep Networks

Zhangyang (Atlas) Wang
Assistant Professor
Texas A&M University

4:10pm Monday, September 25, 2017
Room 124 HRBB

Abstract

While many sophisticated models have been developed for visual information processing, very few pay attention to usability in the presence of data quality degradation. Most successful models are trained and evaluated on high-quality visual datasets, yet in practice the data source often cannot be assured of high quality. For example, video surveillance systems have to rely on cameras of very limited definition, owing to the prohibitive cost of installing high-definition cameras everywhere, which leads to the practical need to recognize objects reliably in very low resolution images. Other quality factors, such as occlusion, motion blur, missing data, and bad weather conditions, are also ubiquitous in the wild. This seminar presents a comprehensive, in-depth review of recent advances in the robust sensing, processing, and understanding of low-quality visual data using deep learning methods. I will mainly show how image/video restoration and visual recognition can be jointly optimized as a single pipeline; such end-to-end optimization consistently achieves superior performance over traditional multi-stage pipelines. I will also demonstrate how our proposed approach substantially improves a number of real-world applications.
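
The end-to-end idea above can be illustrated with a minimal PyTorch sketch, assuming a toy restoration network and a toy classifier (not the speaker's actual models): both are trained jointly under one loss, so restoration is optimized for recognition rather than only for visual quality.

    # A minimal sketch of joint restoration + recognition as one pipeline.
    # Architectures and loss weighting are invented for illustration.
    import torch
    import torch.nn as nn

    restorer = nn.Sequential(            # toy "restoration" front-end
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    classifier = nn.Sequential(          # toy recognition network
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    opt = torch.optim.Adam(list(restorer.parameters()) + list(classifier.parameters()))

    # Fake batch: low-quality inputs, clean references, and class labels.
    low, clean = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
    labels = torch.randint(0, 10, (4,))

    restored = restorer(low)
    logits = classifier(restored)        # gradients flow back through the restorer
    loss = nn.functional.cross_entropy(logits, labels) \
         + 0.1 * nn.functional.mse_loss(restored, clean)  # optional restoration term
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"joint loss: {loss.item():.3f}")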

Biography

Dr. Zhangyang (Atlas) Wang is an Assistant Professor of Computer Science and Engineering (CSE) at Texas A&M University (TAMU). From 2012 to 2016, he was a Ph.D. student in the Electrical and Computer Engineering (ECE) Department at the University of Illinois at Urbana-Champaign (UIUC), working with Professor Thomas S. Huang. Prior to that, he obtained his B.E. from the University of Science and Technology of China (USTC) in 2012. Dr. Wang's research addresses machine learning, computer vision, and multimedia signal processing problems using advanced feature learning and optimization techniques. He has co-authored around 40 papers and published several books and chapters. He has been granted 3 patents and has received over 15 research awards and scholarships. His research has been covered by worldwide media, such as the BBC, Fortune, International Business Times, and the UIUC news and alumni magazine. More can be found at: http://www.atlaswang.com

Faculty Contact: Dilma Da Silva


SuiteSparse:GraphBLAS: graph algorithms via sparse matrix operations on semirings

Tim Davis
Professor
Texas A&M University

4:10pm Monday, October 2, 2017
Room 124 HRBB

Abstract

SuiteSparse:GraphBLAS is a full implementation of the GraphBLAS standard, which defines a set of sparse matrix operations on an extended algebra of semirings using an almost unlimited variety of operators and types. When applied to sparse adjacency matrices, these algebraic operations are equivalent to computations on graphs. GraphBLAS provides a powerful and expressive framework for creating graph algorithms based on the elegant mathematics of sparse matrix operations on a semiring.

The performance of SuiteSparse:GraphBLAS is on par with, or faster than, the corresponding operations in MATLAB. Submatrix assignment is particularly efficient. In one example, C(I,J)=A for a matrix C of size 3 million-by-3 million with 14 million nonzeros and a matrix A of size 5500-by-7000 with 38500 nonzeros takes 82 seconds in MATLAB but only 0.74 seconds in SuiteSparse:GraphBLAS. This result includes finalizing the computation and returning the result to MATLAB as a valid sparse matrix. SuiteSparse:GraphBLAS also includes a non-blocking mode, so a sequence of submatrix assignments can be even more efficient.
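
To illustrate the semiring idea, here is a minimal NumPy sketch (deliberately not the GraphBLAS API): replacing (+, *) with (min, +) turns repeated squaring of a weighted adjacency matrix into all-pairs shortest paths. The 4-vertex graph is invented for the example.

    # Matrix "multiplication" over the (min, +) semiring is a graph computation.
    import numpy as np

    INF = np.inf
    # Weighted adjacency matrix of a 4-vertex digraph (INF = no edge,
    # zeros on the diagonal act as the semiring's multiplicative identity paths).
    A = np.array([[0,   3,  INF,  7],
                  [8,   0,   2, INF],
                  [5, INF,   0,   1],
                  [2, INF, INF,   0]], dtype=float)

    def min_plus(X, Y):
        """Matrix product with (+, *) replaced by (min, +)."""
        # result[i, j] = min over k of X[i, k] + Y[k, j]
        return np.min(X[:, :, None] + Y[None, :, :], axis=1)

    D = A.copy()
    for _ in range(2):          # two squarings cover paths of up to 4 edges
        D = min_plus(D, D)
    print(D)                    # D[i, j] = shortest-path distance from i to j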

Biography

Tim Davis is a Professor in the Computer Science and Engineering Department at Texas A&M University. His primary scholarly contribution is the creation of widely used sparse matrix algorithms and software. As an NVIDIA Academic Partner, he is creating a new suite of highly parallel sparse direct methods that can exploit the high computational throughput of recent GPUs. He was elected in 2013 as a SIAM Fellow, in 2014 as an ACM Fellow, and in 2016 as an IEEE Fellow. He serves as an associate editor for ACM Transactions on Mathematical Software and the SIAM Journal on Scientific Computing. Tim is a Master Consultant to The MathWorks, and the primary author of x=A\b in MATLAB when A is sparse.

Faculty Contact: Dr. Lawrence Rauchwerger


A Sub-linear Time Algorithm for Pattern Matching in Big Data

Krishna Narayanan
Eric D. Rubin '06 Professor
Texas A&M University

4:10pm Monday, October 9, 2017
Room 124 HRBB

Abstract

We are witnessing unprecedented growth in the amount of data being collected and made available for data mining. While the availability of large-scale datasets presents exciting opportunities for advancing science, healthcare, and the understanding of human behavior, mining these datasets for useful information is computationally challenging. The volume of data is growing faster than the available computing power, creating a dire need for computationally efficient data mining algorithms. From an algorithmic complexity standpoint, we are transitioning from a mindset in which algorithms with complexity linear in the size of the dataset were considered efficient to an era in which even linear complexity is infeasible owing to the sheer size of the datasets. This necessitates the creation of algorithms with sub-linear time complexity tailored to big data.

One of the most fundamental data analytics tasks is querying a dataset to determine whether a particular pattern of symbols appears in it, either exactly or approximately. We assume that sketches of the original signal can be computed offline and stored. After providing a brief overview of existing algorithms for fast substring matching, I will describe our substring matching algorithm, which leverages the sparse Fourier transform computation-based approach introduced by Pawar and Ramchandran. We show that our algorithm finds matches with high probability (asymptotically in the size of the dataset and the query) with sub-linear time complexity.
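
For readers unfamiliar with Fourier-based matching, the following sketch shows the standard O(n log n) baseline that sub-linear approaches improve upon: exact substring matching via FFT cross-correlation of per-symbol indicator vectors. This is not the speaker's algorithm, and the text and query are invented.

    # Exact substring matching via FFT cross-correlation (standard baseline).
    import numpy as np
    from scipy.signal import fftconvolve

    def fft_matches(text: str, query: str) -> list[int]:
        """Return all offsets where query occurs exactly in text."""
        m = len(query)
        score = np.zeros(len(text) - m + 1)
        for ch in sorted(set(query)):
            t = np.array([c == ch for c in text], dtype=float)
            q = np.array([c == ch for c in query], dtype=float)[::-1]
            # Correlating indicator vectors counts aligned occurrences of ch.
            score += fftconvolve(t, q)[m - 1 : len(text)]
        # An exact match aligns all m query symbols.
        return [i for i, s in enumerate(np.rint(score)) if s == m]

    print(fft_matches("GATTACATTACA", "TTACA"))   # -> [2, 7]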

Potential applications of this work include text matching, audio/image matching, DNA matching in genomics, metabolomics, radio astronomy, searching for signatures of events within large databases, and detecting viruses within binary executable files. I am actively looking for collaborators who can use fast pattern matching in their areas of expertise.

Biography

Krishna Narayanan is the Eric D. Rubin '06 Professor in the ECEN department at Texas A&M University. His research interests are in coding theory, information theory, and signal processing, with applications to wireless communications, data storage, and data science. On the teaching side, he is excited by the use of technological tools to personalize the learning experience of students. He is a Fellow of the IEEE. He currently serves as an associate editor for the IEEE Transactions on Information Theory and serves on the board of governors of the IEEE Information Theory Society. When he is not matching patterns at work, he (mostly unsuccessfully) tries to identify patterns when listening to Indian classical music. He is also a self-proclaimed expert in analyzing cricket matches.

Faculty Contact: Dr. Anxiao (Andrew) Jiang


Distributed Algorithmic Foundations of Large-scale Data Computation

Gopal Pandurangan
Professor
University of Houston

4:10pm Monday, October 16, 2017
Room 124 HRBB

Abstract

Motivated by the emergence of distributed "Big Data" computing, we develop a theory of distributed computing for large-scale data. Our computation model is a distributed message-passing model called the k-machine model, in which k machines jointly perform a computation on input data of size n (typically, n is much larger than k). The input is assumed to be partitioned among the k machines, which is a common situation in many real-world systems. In particular, we focus on computation on graphs, and we present several complexity results --- both lower and upper bounds --- for fundamental graph problems such as verifying graph connectivity, constructing a minimum spanning tree, computing PageRank, and enumerating triangles. Our model provides a unified framework to design and analyze distributed algorithms for large-scale problems, as well as to quantify the fundamental limitations of distributively solving problems whose input is partitioned across a distributed system.
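
A toy simulation can make the k-machine model concrete. In the sketch below (invented for illustration, not from the paper), vertices are randomly partitioned across k machines, connected components are computed by synchronous label propagation, and the costs reported are rounds and cross-machine messages rather than wall-clock time.

    # Toy k-machine simulation: random vertex partition, synchronous rounds.
    import random

    def k_machine_cc(n, edges, k, seed=0):
        """Connected-component labels, round count, and cross-machine messages."""
        random.seed(seed)
        home = [random.randrange(k) for _ in range(n)]   # random vertex partition
        label = list(range(n))
        rounds = messages = 0
        changed = True
        while changed:
            rounds += 1
            changed = False
            new = label[:]
            for u, v in edges:
                for a, b in ((u, v), (v, u)):
                    if home[a] != home[b]:
                        messages += 1        # a's label crosses machines to b
                    if label[a] < new[b]:    # propagate the smaller label
                        new[b] = label[a]
                        changed = True
            label = new
        return label, rounds, messages

    edges = [(0, 1), (1, 2), (3, 4)]         # two components: {0,1,2} and {3,4}
    print(k_machine_cc(5, edges, k=3))       # e.g. ([0, 0, 0, 3, 3], 3, ...)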

Biography

Gopal Pandurangan (http://www.cs.uh.edu/~gopal) is a Professor in the Department of Computer Science at the University of Houston. He received his Ph.D. in Computer Science from Brown University in 2002. He has held faculty and visiting positions at Nanyang Technological University in Singapore, Brown University, Purdue University, and Rutgers University. His research interests are in theory and algorithms, distributed computing, networks, large-scale data, and computational biology. He has published over 100 refereed papers in these areas. His work has appeared in JACM, SICOMP, ACM TALG, STOC, FOCS, SODA, PODC, SPAA, INFOCOM, and RECOMB. His research has been supported by research grants from the U.S. National Science Foundation, U.S.-Israeli Binational Science Foundation, and the Singapore Ministry of Education.

Faculty Contact: Dr. Jennifer Welch


Detecting and Identifying Sign Language Content on Video Sharing Sites

Frank Shipman
Professor
Texas A&M University

4:10pm Monday, October 23, 2017
Room 124 HRBB

Abstract

Video sharing websites are used by members of the deaf and hard of hearing community to exchange signed content. Unfortunately, these services lack the ability to search for and locate untagged or unlabeled sign language content. As a result, members of this community rely on ad-hoc mechanisms, such as email and blogs, to pass around pointers to internet-based recordings. To remedy this situation, we are developing techniques that automatically detect sign language video and that can distinguish between different sign languages. This talk describes our initial video-analysis techniques to detect sign language content, three optimization strategies to reduce the computational costs associated with such detection, and techniques to distinguish between different sign languages.

Biography

Frank Shipman is a Professor in the Department of Computer Science and Engineering at Texas A&M University. His research interests include topics in computer-supported cooperative work, multimedia, computers and education, and intelligent user interfaces, and he has published over 150 peer-reviewed papers on these topics. His current projects explore (1) issues surrounding social media ownership, (2) the potential of prediction games to promote data analysis skills, and (3) access to sign language video for the deaf and hard-of-hearing.

Faculty Contact: TBA


Internet privacy: Towards more transparency

The Texas A&M Cybersecurity Distinguished Lecture Series

Balachander Krishnamurthy
Lead Inventive Scientist
AT&T Labs – Research

4:10pm Monday, October 30, 2017
Room 124 HRBB

Abstract

Internet privacy has become a hot topic recently with the radical growth of Online Social Networks (OSNs) and attendant publicity about various leakages. For the last several years we have been examining the aggregation of users' information by a steadily decreasing number of entities as unrelated websites are browsed. I will present results from several studies on leakage of personally identifiable information (PII) via Online Social Networks and popular non-OSN sites. Linking information gleaned from different sources presents a challenging problem to technologists, privacy advocates, government agencies, and the multi-billion-dollar online advertising industry. Economics might hold the key to increasing transparency in the largely hidden exchange of data in return for access to so-called free services. I will also talk briefly about transient online social networks and doing privacy research at scale.

Biography

Balachander Krishnamurthy is a lead inventive scientist at AT&T Labs Research. His focus of research is in the areas of Internet privacy, transparency and fairness in ML algorithms, and Internet measurements. He has authored and edited ten books, published over one hundred technical papers, holds seventy-five patents, and has given invited talks in thirty-five countries.

He co-founded the ACM Internet Measurement Conference in 2000 and the Conference on Online Social Networks in 2013, and is involved in the Data Transparency Lab's efforts to fund privacy research. He has been on the thesis committees of several Ph.D. students, collaborated with over eighty researchers worldwide, and given tutorials at several industrial sites and conferences.

His book "Internet Measurements: Infrastructure, Traffic and Applications" (525pp, Wiley, with Mark Crovella) is the first book focusing on Internet Measurement. His previous book with Jen Rexford, 'Web Protocols and Practice: HTTP/1.1, Networking Protocols, Caching, and Traffic Measurement' (672 pp, Addison-Wesley), is the first in-depth book on the technology underlying the World Wide Web, and has been translated into Portuguese, Japanese, Russian, and Chinese. Bala is homepage-less and not on any OSN but many of his papers can be found at http://www.research.att.com/~bala/papers.

Faculty Contact: Dr. Nick Duffield


A Generic Transformation for Optimal Repair Bandwidth and Rebuilding Access in MDS Codes

Chao Tian
Associate Professor
Texas A&M University

4:10pm Wednesday, November 1, 2017
Room 124 HRBB

Abstract

Big data analytics must be supported by large-scale data storage systems, and there has been considerable effort in designing storage systems that are both highly efficient and highly reliable. For such systems, the repair of data on failed storage devices is a major factor in overall system performance. Conventional replication-based storage has a simple data repair mechanism, but its storage efficiency is very low. On the other hand, maximum distance separable (MDS) codes, such as the Reed-Solomon code, are optimal in terms of storage efficiency, but data repair is difficult for such codes. This raises the question of how to design new MDS codes with the least repair bandwidth (or rebuilding access) to facilitate efficient data repair, and this problem has attracted significant recent research activity. We provide a simple solution based on a generic transformation of any existing MDS code. The transformation consists of sequentially stacking, permuting, and pairwise transforming coded symbols, and it can convert essentially any existing MDS code into a new code with a desired repair property. Applications of this transformation allow us to construct repair-efficient codes of various forms.

This talk is partly based on a joint paper with Jie Li and Xiaohu Tang, which was the winner of the 2017 Jack Keil Wolf ISIT Student Paper Award.
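
Some back-of-the-envelope arithmetic helps quantify the trade-off described above. The sketch below compares 3x replication with an (n,k) MDS code, using the naive repair cost (read k blocks, i.e., the whole file) and, as a point of reference, the cut-set lower bound on single-node repair bandwidth from the regenerating-codes literature (Dimakis et al.), with all n-1 surviving nodes as helpers; that bound is imported for illustration and is not a result from this talk.

    # Storage overhead and repair bandwidth for a file of size M in k blocks.
    M = 1.0                       # file size (arbitrary units)

    def compare(n, k):
        block = M / k
        print(f"(n,k)=({n},{k}):")
        print(f"  storage overhead      : {n / k:.2f}x (vs 3.00x for replication)")
        print(f"  naive MDS repair reads: {k * block:.2f}  (the whole file)")
        # Cut-set bound with d = n-1 helpers: (n-1)/(n-k) * (M/k).
        print(f"  optimal repair bound  : {(n - 1) / (n - k) * block:.2f}")

    compare(14, 10)               # e.g. a (14,10) code, as used in practice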

Biography

Dr. Chao Tian received the B.E. degree in Electronic Engineering from Tsinghua University, Beijing, China, in 2000 and the M.S. and Ph.D. degrees in Electrical and Computer Engineering from Cornell University, Ithaca, NY, in 2003 and 2005, respectively. He was a postdoctoral researcher at Ecole Polytechnique Federale de Lausanne (EPFL) from 2005 to 2007, a member of technical staff--research at AT&T Labs--Research in New Jersey from 2007 to 2014, and an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee Knoxville from 2014 to 2017. He joined the Department of Electrical and Computer Engineering at Texas A&M University in 2017, where he is now an Associate Professor. His research interests include data storage systems, information theory, computer algorithms, and image and video processing. Dr. Tian received the Liu Memorial Award at Cornell University in 2004 and the 2014 IEEE Data Storage Best Paper Award.

Faculty Contact: Dr. Anxiao (Andrew) Jiang


On the Hardness of Counting in Algorithmic Number Theory

J. Maurice Rojas
Professor of Mathematics and Computer Science and Engineering
Texas A&M University

4:10pm Monday, November 6, 2017
Room 124 HRBB

Abstract

The boundary between polynomial-time algorithms and #P-complete problems for counting number-theoretic objects (such as points on elliptic curves or lattice points in convex bodies) is vast and mysterious. We will briefly survey some old and recent results, some quite relevant to cryptology, and end with a new polynomial-time algorithm for counting roots of polynomials over prime power rings. The best previous algorithm was exponential-time, even for counting roots mod p^2. This is joint work with Qi Cheng, Shuhong Gao, and Daqing Wan.
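
For context, the brute-force baseline for the counting problem above is easy to state but exponential in the input's bit length, since it enumerates all of Z/p^k. The example polynomial x^2 (chosen arbitrarily here), with its repeated root at 0, also hints at why naive Hensel lifting fails and counting mod p^2 is subtle.

    # Brute-force root counting over Z/p^k: cost grows with p^k itself.
    def count_roots_mod(coeffs, modulus):
        """Count x in Z/modulus with f(x) = 0, where coeffs = [c0, c1, ...]
        encodes f(x) = c0 + c1*x + c2*x^2 + ..."""
        def f(x):
            acc = 0
            for c in reversed(coeffs):        # Horner's rule mod the modulus
                acc = (acc * x + c) % modulus
            return acc
        return sum(1 for x in range(modulus) if f(x) == 0)

    p = 7
    f = [0, 0, 1]                             # f(x) = x^2, a degenerate example
    for k in (1, 2, 3):
        print(f"roots of x^2 mod {p}^{k}: {count_roots_mod(f, p**k)}")
    # -> 1, 7, 7: the count jumps at prime powers, unlike for simple roots.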

Biography

J. Maurice Rojas is a mathematician working at the intersection of complexity theory, algebraic geometry, and number theory. He is currently a full professor of mathematics and, by courtesy, of computer science and engineering at Texas A&M. He obtained his applied mathematics Ph.D. from UC Berkeley in 1995 (under the guidance of Fields Medalist Steve Smale) and his computer science M.S. in 1991 (under the guidance of John Canny). He has held visiting positions at TU Munich, ENS Lyon, Johns Hopkins University, MSRI, IMA, Sandia National Laboratories, MIT, and CCR. Rojas was research director for the MSRI-UP program last summer, a von Neumann visiting professor at TU Munich in 2015, and a winner of the 2013 ISSAC distinguished paper award for his work on sparse polynomials over finite fields (joint with J. Bi and Q. Cheng); earlier in his career, he was an NSF CAREER Fellow and an NSF Postdoc. He has also run a successful NSF-sponsored REU on algorithmic algebraic geometry for the past 13 years.

Faculty Contact: Dr. Nancy M. Amato


Balancing Naturalness, Convenience, and Comfort for Interaction Techniques in Virtual Reality

Eric Ragan
Assistant Professor
Texas A&M University

4:10pm Monday, November 13, 2017
Room 124 HRBB

Abstract

For virtual reality (VR), the goal is often to achieve a realistic simulation that closely matches the experiences of the real world. Ideally, users could freely walk around and use their physical hands to directly interact with the virtual environment. While this can be achieved with tracking technology, practical limitations make it difficult to simulate a variety of situations with a single system. For example, virtual environments are often much larger than the available tracked physical space, and virtual worlds can offer more ways to interact than a typical physical space supports. A related issue with realistic interaction is that large amounts of physical movement are often not preferred for comfortable and convenient use of technology. Our research investigates interaction techniques that balance the level of realism with the level of convenience for practical real-world uses of VR. We study semi-natural methods for navigation and view control that work for seated use of virtual reality with HMDs when physically turning all the way around is not ideal, such as when sitting on a couch or at a desk. We also explore the use of perceptual illusions and haptics to allow direct hand interaction through physical props. This talk will provide an overview of the technical and practical considerations important to the design of convenient 3D interaction techniques, and it will present the results of empirical studies of how different techniques affect users' spatial orientation, sickness, and experiences in VR.

Biography

Dr. Eric Ragan is an Assistant Professor in the Department of Visualization at Texas A&M University, and he is a joint faculty member in the Department of Computer Science and Engineering. He directs the Interactive Data and Immersive Environments (INDIE) Lab. His research interests include human-computer interaction, information visualization, visual analytics, virtual reality, and 3D interaction techniques. He previously worked as a visual analytics research scientist at Oak Ridge National Laboratory, where he studied visualization designs that enable monitoring and analysis of streaming data. Current research topics include the visualization of analytic provenance, understandable visual interfaces for machine learning systems, and natural interaction techniques for immersive virtual environments. Dr. Ragan received his Ph.D. in computer science from Virginia Tech. Contact him at eragan@tamu.edu.

Faculty Contact: Dr. John Keyser


Computer Aided Storytelling

Ergun Akleman
Professor
Texas A&M University

4:10pm Monday, November 20, 2017
Room 124 HRBB

Abstract

We have developed a general framework for constructing non-deterministic models of social and economic interactions. Our framework synthesizes standard narratology and graph theory, representing actants by vertices and their relations by edges. An event is treated as a transformation of the graph structure and is specified by narratological story grammars. The framework provides analytical tools and helps investigate basic economic decision-making problems, such as "framing" as observed in behavioral economics. A special application of this method is the creation of never-ending stories. We earlier developed a visual storytelling system that helps users interactively create narratives using discrete-time stochastic processes with the Markov property; it can generate never-ending narrative segments in cartoon form. During the presentation, we will also create stories.
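
A toy rendering of the graph-based framework may help: actants are vertices, relations are labeled edges, and an event rewrites the graph. The story world and the "betrayal" rule below are invented for the sketch, not taken from the speaker's system.

    # Actants as vertices, labeled relations as edges, events as graph rewrites.
    story = {("knight", "ally", "king"),
             ("dragon", "threatens", "village"),
             ("king", "rules", "village")}

    def apply_event(graph, event):
        """An event removes some relations and adds others."""
        return (graph - event["remove"]) | event["add"]

    betrayal = {                     # narratological rule: an ally turns traitor
        "remove": {("knight", "ally", "king")},
        "add": {("knight", "enemy", "king")},
    }

    story = apply_event(story, betrayal)
    for subj, rel, obj in sorted(story):
        print(f"{subj} --{rel}--> {obj}")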

Biography

Ergun Akleman is a Professor in the Departments of Visualization and Computer Science. He has been at Texas A&M University since 1995. He received his Ph.D. in Electrical and Computer Engineering from the Georgia Institute of Technology in 1992. He is also a professional cartoonist, illustrator, and caricaturist who has published more than 500 cartoons, illustrations, and caricatures. His research work is interdisciplinary, usually motivated by aesthetic concerns. He has published extensively in the areas of shape modeling, image synthesis, artistic depiction, image-based lighting, texture and tiles, computer-aided caricature, electrical engineering, and computer-aided architecture.

Faculty Contact: Dr. Nancy M. Amato


But Why Does It Work?
A Rational Protocol Design Treatment of Bitcoin

Juan Garay
Professor
Texas A&M University

4:10pm Monday, December 4, 2017
Room 124 HRBB

Abstract

An exciting recent line of work has focused on formally investigating the core cryptographic assumptions underlying the security of Bitcoin. In a nutshell, these works conclude that Bitcoin is secure if and only if the majority of the mining power is honest. Despite their impact, however, these works do not address an incisive question by positivists and Bitcoin critics, which is fueled by the fact that Bitcoin indeed works in reality: Why should the real-world system adhere to these assumptions?

In this work we employ the machinery of the Rational Protocol Design (RPD) framework of Garay et al. [FOCS '13] to analyze Bitcoin and address questions such as the above. We show that under the natural class of incentives for the miners' behavior -- i.e., rewarding them for adding blocks to the blockchain but having them pay for mining -- we can retain the honest majority assumption as a fallback, or even, depending on the application, completely replace it by the assumption that the miners aim to maximize their revenue.

Our results underscore the appropriateness of RPD as a "rational cryptography" framework for analyzing Bitcoin. Along the way, we devise significant extensions to the original RPD machinery that broaden its applicability to cryptocurrencies, which we believe may be of independent interest.

This is joint work with Christian Badertscher, Ueli Maurer, Daniel Tschudi, and Vassilis Zikas.

Biography

Juan A. Garay has been a full professor in the CSE Department since the beginning of Fall '17. Previously, after receiving his PhD in Computer Science from Penn State, he was a postdoc at the Weizmann Institute of Science (Israel), and held research positions at the IBM T.J. Watson Research Center, Bell Labs, AT&T Labs--Research, and Yahoo Research. His research interests include both foundational and applied aspects of cryptography and information security. He has published extensively in the areas of cryptography, network security, distributed computing, and algorithms; has been involved in the design, analysis, and implementation of a variety of secure systems; and is the recipient of over two dozen patents. Dr. Garay has served on the program committees of numerous conferences and international panels---including co-chairing Crypto 2013 and 2014, the discipline's premier conference.

Faculty Contact: Dr. Dilma Da Silva


GraphBLAS API: A Linear Algebra Programming API for Large-Scale Graph Computations

Hao Yu
Research Staff Member
IBM T.J. Watson Research Center

4:10pm Wednesday, December 6, 2017
Room 124 HRBB

Abstract

Graph-based algorithms play a crucial role in workflows for analyzing large datasets in domains such as social networks, biology, fraud detection, and sentiment analysis. GraphBLAS is the standardization of the key building blocks for matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. An important part of this standardization effort is translating the mathematical specification into an actual Application Programming Interface (API) that (i) is faithful to the algorithmic formulation and (ii) enables efficient implementations on modern hardware. Together with academic communities, the IBM Research Commercial System Group actively participated in the definition, implementation, and validation of GraphBLAS. In this talk, we will discuss a GraphBLAS-like implementation in Scala on Spark that validates the premise of GraphBLAS: concise programming with a performance advantage over the vertex-centric approach.

Biography

Hao Yu received the BS and MS degrees in computer science from Tsinghua University, Beijing, China, in 1994 and 1997. Supervised by Lawrence Rauchwerger, he received the PhD degree in computer science from Texas A&M University, College Station. Hao Yu is currently a research staff member at the IBM T.J. Watson Research Center in Yorktown Heights, New York. Since he joined IBM Research, he has worked on multiple research and development efforts, including Blue Gene/L supercomputer, intrusion prevention systems, risk analytics, and big data graph analytics. His research interests are in performance driven optimization for high-performance and commercial computing services, systems, system software, and applications.

Faculty Contact: Dr. Lawrence Rauchwerger


Spring 2018 Abstracts

 

Understanding the Interaction of Law, Policy, and Technology: The Inherent Conflict for Computer Scientists and Engineers

May We Live in Interesting Times!

Paula deWitte
Associate Professor of Practice
Texas A&M University

4:10pm Wednesday, January 17, 2018
Room 124 HRBB

Abstract

The interaction between law, policy, and technology provides a challenge to computer scientists/engineers. Law necessarily lags technology. With technical advances (e.g., digital currency, data mining) come new opportunities for compliance and governance, regulations, and crimes. Legislatures and courts respond with new laws or apply existing laws that may fit poorly. This tension between law and technology is keenly visible in cybersecurity, where there is an uncertain legal and policy framework for "hacking back," protecting privacy, or applying Fourth Amendment protections.

Even if not directly involved in cybersecurity, computer scientists/engineers are routinely responsible for protecting more sensitive data than members of other professions. Their systems may process personally identifiable information covered by myriad specific U.S. data protection laws such as HIPAA (medical data) or Gramm-Leach-Bliley (financial sector). Yet the technical definition of what constitutes "personally identifiable information" changes with advances in, among other areas, biometrics. Computer scientists/engineers may also be responsible for intellectual property data that is subject to surreptitious "low and slow" advanced persistent threat cyber attacks. Internet communication that transcends geographical borders complicates traditional legal concepts of jurisdiction in pursuing cyber attackers. And under the 2018 General Data Protection Regulation, the European Union claims "extra-territorial" jurisdiction over the personal data of residents of European Union countries no matter where they physically live.

This talk will discuss new and emerging issues and laws that affect computer science and engineering professionals and provide a brief overview of laws or areas of concern.

Biography

Paula deWitte received her BS and MS from Purdue University, her Ph.D. in Computer Science from Texas A&M University in 1989, and her Juris Doctorate from St. Mary's University in 2008. She started her first company in College Station while pursuing her Ph.D. and then worked as an executive in the Austin technology community for several years before attending law school. She consulted in Houston in the oil and gas industry and is a co-inventor on a patent for optimizing drilling fluids ("mud") during the drilling process. A second patent, on cybersecurity incident response in the operational technology environment, was submitted through the EPO and is now in the USPTO. In 2016, Dr. deWitte obtained her patent license with the USPTO, and in 2017 she accepted her present position at Texas A&M University as Assistant Director of the Texas A&M Cybersecurity Center and an Associate Professor of Practice in Computer Science and Engineering. Her academic interests are in the interaction of law, policy, and technology.

Faculty Contact: Dr. Dilma Da Silva


Playing at Planning: Developing Games to Support Disaster Responders

Zach Toups
Assistant Professor
New Mexico State University

4:10pm Monday, January 22, 2018
Room 124 HRBB

Abstract

We draw on years of ethnographic investigation into the disaster response practices of fire emergency response, urban search and rescue, and incident command to inform the design of games. Our objective is to support training disaster responders, yet our findings apply to general game design. We identify critical components of disaster response practice, from which we develop game design patterns: emergent objectives, developing intelligence, and collaborative planning. We expect that, in implementing these patterns, designers can engage players in disaster-response-style planning activities. To support the design patterns, we survey exemplar games, through case studies. We contribute a set of game design patterns that support designers in building games that engage players in planning activities.

Biography

Zachary O. Toups is an Assistant Professor of Computer Science at New Mexico State University, where he has been since August 2013. His research sits at the intersection of collaboration support, game design, wearable computers, and disaster response. His current projects explore how game players gather and share information in-game; how games can teach disaster planning activities; how wearables can support human-human and human-drone collaboration; and how games can be used to design and test wearable computer interfaces. He asserts that digital game play is the human-computer interface in its purest form; people play games in order to experience interfaces.

In his present position, Prof. Toups has brought in over 1 million USD in research funding from the U.S. National Science Foundation and the Army Research Lab, including an NSF CAREER award. He directs the Play and Interactive Experiences for Learning (PIxL) Lab, which supports five Ph.D. students and a number of M.S. students. He is Co-Chair of the ACM SIGCHI Annual Symposium on Human-Computer Interaction and Play (CHI PLAY) for 2016 and 2017, and has served in a number of roles for CHI PLAY, the ACM SIGCHI Conference on Human Factors in Computing Systems, and other conferences. Prof. Toups received his Ph.D. in Computer Science from Texas A&M University in 2010 and his B.A. in Computer Science from Southwestern University in 2003. Between his Ph.D. and his appointment at NMSU, he researched information technology to support disaster responders at the Texas A&M Engineering Extension Service with the Texas Task Force 1 elite urban search and rescue group.

Faculty Contact: Dr. Andruid Kerne


High performance computing in heterogeneous systems, GPUs, and other beasts. Did you think they were difficult to program?

Arturo Gonzalez-Escribano
Associate Professor
Universidad de Valladolid

4:10pm Wednesday, January 24, 2018
Room 124 HRBB

Abstract

A wide range of computing systems, from big supercomputers to small devices like those embedded in mobile phones, include more and more computing elements, often of heterogeneous types. Exploiting parallelism is key to squeezing all the computing power out of these systems. Parallel programming tools and compilers are evolving, adding more abstractions and techniques to help the programmer, but we still face unexpected challenges when programming for high performance. When heterogeneity is considered, combining different types of devices that may use different native programming models, things become even more daunting for the unaware user.

In this talk I will discuss the relation between architecture knowledge, high performance, and parallel computing. Is programming for a GPU really so different from programming for a multicore? What can a compiler do for me, and what can it not? What are the implications of combining different types of devices in a heterogeneous computation? Is it really so difficult to program these systems? Finally, I will introduce some of the research lines and projects of the Trasgo group at the University of Valladolid, where we are trying to address some of these issues.

Biography

Dr. Arturo Gonzalez-Escribano obtained his PhD in Computer Science from the University of Valladolid in 2003 and became Associate Professor in the Department of Computer Science at the same university in 2008. He has participated in and led several research projects funded by regional and national governments, and in 2015 he led the Spanish national excellence research network for Parallel Programming in Heterogeneous Systems. He has also led several collaborative projects with local enterprises, applying high performance techniques to embedded systems, transportation technology, scientific simulations, and social analysis. His research interests focus on high performance, parallel computing, and heterogeneous systems.

Faculty Contact: Dr. Lawrence Rauchwerger


Computer Vision and Machine Learning for Cancerous Tissue Recognition

Nikos Papanikolopoulos
McKnight Presidential Endowed Professor, Distinguished McKnight University Professor
University of Minnesota

4:10pm Monday, January 29, 2018
Room 124 HRBB

Abstract

Today, vast and unwieldy data collections are regularly being generated and analyzed in hopes of supporting an ever-expanding range of challenging sensing applications. Modern inference schemes usually involve millions of parameters to learn complex real-world tasks, which creates the need for large annotated datasets for training. For several visual learning applications, collecting large amounts of annotated data is either challenging or very expensive; one such domain is medical image analysis. In this work, machine learning methods were devised with emphasis on Cancerous Tissue Recognition (CTR) applications.

First, a lightweight active constrained clustering scheme was developed for processing image data, capitalizing on actively acquired pairwise constraints. The proposed methodology introduces the use of Silhouette values, conventionally used for measuring clustering performance, to rank samples by information content. Second, an active selection framework that operates in tandem with Convolutional Neural Networks (CNNs) was constructed for CTR. In the presence of limited annotations, alternative (or sometimes complementary) venues were explored in an effort to restrain the high expenditure of collecting the image annotations required by CNN-based schemes.
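
As a simplified illustration of the ranking step in the first part, the sketch below uses per-sample Silhouette values to identify the most ambiguous points, which are natural candidates for pairwise-constraint queries. The clustering task is synthetic, and the full active scheme is not reproduced.

    # Rank samples by Silhouette value; low values mark ambiguous assignments.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_samples

    X, _ = make_blobs(n_samples=300, centers=3, cluster_std=2.5, random_state=0)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    sil = silhouette_samples(X, labels)     # in [-1, 1]; low = ambiguous
    query_order = np.argsort(sil)           # most ambiguous samples first

    print("10 best candidates for constraint queries:", query_order[:10])
    print("their silhouette values:", np.round(sil[query_order[:10]], 2))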

Third, a Symmetric Positive Definite (SPD) image representation was derived for CTR, termed the Covariance Kernel Descriptor (CKD), which consistently outperformed a large collection of popular image descriptors. Even though the CKD successfully describes the tissue architecture of small image regions, its performance decays on larger slide regions or whole tissue slides due to the larger variability that tissue exhibits at that level, since different types of tissue (healthy, benign disease, malignant disease) can be present as the regions grow. Fourth, to extend the recognition capability of the CKD to larger slide regions, the Weakly Annotated Image Descriptor (WAID) was devised as the parameters of classifier decision boundaries in a multiple instance learning framework.

*This is joint work with Panos Stanitsas and Sasha Truskinovsky

Biography

Nikolaos P. Papanikolopoulos (IEEE Fellow) received the Diploma degree in electrical and computer engineering from the National Technical University of Athens, Athens, Greece, in 1987, and the M.S.E.E. and Ph.D. degrees in electrical and computer engineering from Carnegie Mellon University (CMU), Pittsburgh, PA, in 1988 and 1992, respectively. Currently, he is a McKnight Presidential Endowed Professor and a Distinguished McKnight University Professor in the Department of Computer Science at the University of Minnesota and Director of the Center for Distributed Robotics and SECTTRA. His research interests include robotics, computer vision, sensors for transportation applications, and control. He has authored or co-authored more than 350 journal and conference papers in these areas (more than seventy-five refereed journal papers).

Faculty Contact: Dr. Nancy M. Amato


Sampling-Based Motion Planning: From Manipulation Planning to Intelligent CAD to Protein Folding

Nancy Amato
Regents and Unocal Professor
Texas A&M University

4:10pm Monday, February 12, 2018
Room 124 HRBB

Abstract

Motion planning has applications in robotics, animation, virtual prototyping and training, and even in seemingly unrelated tasks such as evaluating architectural plans or simulating protein folding. Surprisingly, sampling-based planning methods have proven effective on problems from all of these domains. In this talk, we provide an overview of sampling-based planning and describe some variants developed in our group, including strategies suited for manipulation planning and for user interaction. For virtual prototyping, we show that in some cases a hybrid system incorporating both an automatic planner and haptic user input leads to superior results. Finally, we describe our application of sampling-based motion planners to simulating molecular motions, such as protein and RNA folding.
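
As a flavor of the paradigm, here is a bare-bones probabilistic roadmap (PRM) for a point robot among circular obstacles in the unit square: a minimal sketch with invented parameters, not any of the planners discussed in the talk.

    # Minimal PRM: sample free configurations, connect nearby ones, search.
    import math, random
    from collections import deque

    obstacles = [((0.5, 0.5), 0.2)]                 # (center, radius)

    def free(p):
        return all(math.dist(p, c) > r for c, r in obstacles)

    def edge_free(p, q, steps=20):                  # check points along the segment
        return all(free((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
                   for t in (i / steps for i in range(steps + 1)))

    random.seed(1)
    nodes = [(0.1, 0.1), (0.9, 0.9)]                # start and goal
    nodes += [p for _ in range(200)
              if free(p := (random.random(), random.random()))]

    # Connect each node to nearby nodes with collision-free straight edges.
    adj = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        for j, q in enumerate(nodes):
            if i < j and math.dist(p, q) < 0.15 and edge_free(p, q):
                adj[i].append(j); adj[j].append(i)

    # BFS on the roadmap from start (index 0) to goal (index 1).
    parent, frontier = {0: None}, deque([0])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u; frontier.append(v)

    if 1 in parent:
        path, u = [], 1
        while u is not None:
            path.append(nodes[u]); u = parent[u]
        print(f"found a path with {len(path)} waypoints")
    else:
        print("no path found; add more samples")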

Biography

Nancy M. Amato is Regents Professor and Unocal Professor of Computer Science and Engineering at Texas A&M University where she co-directs the Parasol Lab. Her main areas of research focus are robotics and motion planning, computational biology and geometry, and parallel and distributed computing. She received undergraduate degrees in Mathematical Sciences and Economics from Stanford University, and M.S. and Ph.D. degrees in Computer Science from UC Berkeley and the University of Illinois, respectively. She is Vice President for Member Activities of the IEEE Robotics and Automation Society, and she served as Program Chair for the 2015 IEEE International Conference on Robotics and Automation (ICRA) and for Robotics: Science and Systems (RSS) in 2016. She is an elected member of the CRA Board of Directors (2014-2017, 2017-2020), was Co-Chair of CRA-Women (2014-2017) and Co-Chair of the NCWIT Academic Alliance (2009-2011), and has served on the Academic Advisory Council of AnitaB.org since 2015. Her honors include the CRA Habermann Award, the NCWIT Harrold and Notkin Research and Graduate Mentoring Award, the IEEE Hewlett Packard/Harriet B. Rigas Award, and a Texas A&M University-level teaching award. She is a Fellow of the AAAI, AAAS, ACM, and IEEE.

Faculty Contact: Dr. Nancy Amato


Building Concept Bridges: Knowledge Discovery from Literature and Beyond

Aidong Zhang
SUNY Distinguished Professor
University at Buffalo

4:10pm Monday, February 19, 2018
Room 124 HRBB

Abstract

With the growth of the World Wide Web and the large-scale digitization of documents, we are overwhelmed with massive information, published formally in scientific journals or informally on the internet. As an example, consider MEDLINE, a premier bibliographic database in the life sciences, currently holding more than 23 million references from approximately 5,600 journals worldwide. As a consequence, Literature-Based Discovery has become a sub-field of text mining that leverages published articles to formulate hypotheses. In this talk, I will discuss how a self-learning framework for knowledge discovery can be designed to mine hidden associations between non-interacting scientific concepts by rationally connecting independent nuggets of published literature. The self-learning process models the evolutionary behavior of concepts to uncover latent associations between text concepts, which allows us to learn the evolutionary trajectories of text terms and detect informative terms in a completely unsupervised manner. Hence, meaningful hypotheses can be generated efficiently without prior knowledge. I will also discuss how this self-learning framework can be extended to include social media and Internet forums. With the capability to discern reliable information from various sources, this framework provides a platform for combining heterogeneous sources and intelligently learning new knowledge with no user intervention.
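
The classic baseline behind literature-based discovery is Swanson's "ABC" co-occurrence model: concepts A and C never co-occur, but both co-occur with intermediate concepts B, suggesting a hidden A-C bridge. The sketch below implements only this baseline on a tiny invented corpus (echoing Swanson's famous fish oil / Raynaud's example); the speaker's self-learning framework goes well beyond it.

    # Swanson-style ABC discovery over a toy corpus of concept sets.
    papers = [
        {"fish oil", "blood viscosity"},
        {"blood viscosity", "raynaud syndrome"},
        {"fish oil", "platelet aggregation"},
        {"platelet aggregation", "raynaud syndrome"},
        {"magnesium", "migraine"},
    ]

    def bridges(a, c, corpus):
        """Intermediate concepts B linking A and C when A and C never co-occur."""
        if any({a, c} <= doc for doc in corpus):
            return set()                        # A and C are already connected
        linked_to = lambda x: set().union(*(d for d in corpus if x in d)) - {x}
        return linked_to(a) & linked_to(c)

    print(bridges("fish oil", "raynaud syndrome", papers))
    # -> {'blood viscosity', 'platelet aggregation'}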

Biography

Dr. Aidong Zhang is a SUNY Distinguished Professor of Computer Science and Engineering at the State University of New York (SUNY) at Buffalo where she served as Department Chair from 2009 to 2015. She is currently on leave and serving as Program Director in the Information & Intelligent Systems Division of the Directorate for Computer & Information Science & Engineering, National Science Foundation. Her research interests include data analytics/data science, machine learning, bioinformatics, and health informatics, and she has authored over 300 research publications in these areas. Dr. Zhang currently serves as the Editor-in-Chief of the IEEE Transactions on Computational Biology and Bioinformatics (TCBB). She served as the founding Chair of ACM Special Interest Group on Bioinformatics, Computational Biology and Biomedical Informatics during 2011-2015 and is currently Chair of its advisory board. She is also the founding and steering chair of ACM international conference on Bioinformatics, Computational Biology and Health Informatics. She has served as editor for several other journal editorial boards, and has also chaired or served on numerous program committees of international conferences and workshops. Dr. Zhang is both an ACM Fellow and an IEEE Fellow.

Faculty Contact: Dr. Xia (Ben) Hu


SEC Faculty Travel Program Award Presentation: Formal Specification, Verification, & Falsification for Autonomous Cyber-Physical Systems with Hyperproperties & Hybrid Automata

Taylor Johnson
Assistant Professor
Vanderbilt University

4:10pm Monday, March 5, 2018
Room 124 HRBB

Abstract

The ongoing renaissance in artificial intelligence (AI) has led to the advent of machine learning methods deployed within components for sensing, actuation, and control in safety-critical cyber-physical systems (CPS), and is enabling autonomy in such systems, such as autonomous vehicles and swarm robots. However, as demonstrated in part through recent accidents in semi-autonomous/autonomous CPS and by adversarial machine learning, ensuring such components operate reliably in all scenarios is extraordinarily challenging. First, we will define and discuss specifying desired behaviors (e.g., for safety, security, robustness, and stability) using hyperproperties, which are sets of properties, where properties are classically defined in formal methods as sets of traces, so hyperproperties are sets of sets of traces and are effective in describing security specifications, such as noninterference. In our recent work, we have developed a real-time, real-valued temporal logic called hyperproperties for signal temporal logic (HyperSTL), which is useful for describing behaviors in autonomous CPS, and a somewhat surprising result is that Lyapunov stability is a hyperproperty and not a property. Next, we will discuss methods to falsify hyperproperties (i.e., to find sets of traces violating a hyperproperty) assuming only black-box models are available. Then we discuss methods to formally verify hyperproperties (i.e., to establish all sets of traces satisfy a hyperproperty) when formal models, such as hybrid automata, are available. We will discuss the application of these approaches in several CPS, such as motor vehicles and swarm robots. Finally, we will conclude with some architectural solutions that enhance trust and safety in autonomous CPS, building on supervisory control with the Simplex architecture, and will discuss future research directions for enhancing trust of machine learning components within CPS that we will soon explore as part of upcoming DARPA Assured Autonomy and NSA/DoD Science of Security Lablet projects. This presentation is supported through an SEC Faculty Travel Program award that we gratefully acknowledge.
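
To fix ideas on the falsification side, the sketch below computes the quantitative robustness of a plain STL property, "always |x(t)| <= 1", over traces of an invented black-box system, and "falsifies" it by a crude sweep over inputs until robustness goes negative. HyperSTL relations over sets of traces, and the talk's actual tools, are beyond this illustration.

    # STL robustness of G(|x| <= 1) and a crude falsification sweep.
    import numpy as np

    def system(gain, t):
        """Toy black-box model: a damped oscillation scaled by an input gain."""
        return gain * np.exp(-0.3 * t) * np.sin(3 * t)

    def robustness_always_bounded(x, bound=1.0):
        """rho(G |x| <= bound) = worst-case margin (bound - |x(t)|) over time."""
        return float(np.min(bound - np.abs(x)))

    t = np.linspace(0, 10, 500)
    for gain in (0.5, 1.0, 1.5, 2.0):          # sweep inputs looking for violations
        rho = robustness_always_bounded(system(gain, t))
        tag = "VIOLATION" if rho < 0 else "ok"
        print(f"gain={gain:.1f}  robustness={rho:+.3f}  {tag}")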

Biography

Dr. Taylor T. Johnson is an Assistant Professor of Computer Engineering (CmpE), Computer Science (CS), and Electrical Engineering (EE) in the Department of Electrical Engineering and Computer Science (EECS) in the School of Engineering (VUSE) at Vanderbilt University (since August 2016), where he directs the Verification and Validation for Intelligent and Trustworthy Autonomy Laboratory (VeriVITAL) and is a Senior Research Scientist in the Institute for Software Integrated Systems (ISIS). Dr. Johnson serves as the President of a medical information technology startup firm, CelerFama, Inc., and as the Chief Technology Officer (CTO) of VeriVITAL, LLC, both of which serve for technology transfer and commercialization of his research group's results to industry. Dr. Johnson was previously an Assistant Professor of Computer Science and Engineering (CSE) at the University of Texas at Arlington (September 2013 to August 2016). Dr. Johnson earned a PhD in Electrical and Computer Engineering (ECE) from the University of Illinois at Urbana-Champaign in 2013, where he worked in the Coordinated Science Laboratory with Prof. Sayan Mitra, and earlier earned an MSc in ECE at Illinois in 2010 and a BSEE from Rice University in 2008. Dr. Johnson worked in industry for Schlumberger at various times between 2005 and 2010 developing novel embedded control systems for downhole tools. Dr. Johnson's research focus is developing formal verification techniques and software tools for cyber-physical systems (CPS). Dr. Johnson has published over 60 papers on these methods and their applications across CPS domains, such as power and energy, aerospace, automotive, transportation, biotechnology, and robotics, two of which were recognized with best paper awards, from the IEEE and IFIP, respectively, and one of which was awarded an ACM Best Software Repeatability Award. Dr. Johnson is a 2018 and 2016 recipient of the AFOSR Young Investigator Program (YIP) award, a 2015 recipient of the National Science Foundation (NSF) Computer and Information Science and Engineering (CISE) Research Initiation Initiative (CRII), and his research is / has been supported by AFOSR, ARO, AFRL, DARPA, NSA, NSF (CISE CCF/SHF, CISE CNS/CPS, ENG ECCS/EPCN), the MathWorks, NVIDIA, ONR, Toyota, and USDOT. Dr. Johnson is a member of AAAS, ACM, AIAA, IEEE, and SAE.

Faculty Contact: Dr. Dylan Shell


Which Neural Net Architectures Give Rise to Exploding and Vanishing Gradients?

Boris Hanin
Assistant Professor
Texas A&M University

4:10pm Wednesday, March 7, 2018
Room 124 HRBB

Abstract

Due to its compositional nature, the function computed by a deep neural net often produces gradients whose magnitude is either very close to 0 or very large. This so-called vanishing and exploding gradient problem is often already present at initialization and is a major impediment to gradient-based optimization techniques. The purpose of this talk is to give a rigorous answer to the question of which neural architectures give rise to exploding and vanishing gradients, in the setting of feed-forward networks with ReLU activations. The results discussed apply to both convolutional and fully connected networks, and they represent the first more-or-less complete characterization of exploding and vanishing gradients for feed-forward networks.
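
A minimal numerical probe of the phenomenon (our sketch, assuming He-style initialization; it is not the talk's analysis): sample random fully connected ReLU nets of fixed width and increasing depth, backpropagate a random gradient, and record its norm.

    import numpy as np

    # Our numerical probe (not the talk's analysis): random fully connected
    # ReLU nets with He-style weights; backpropagate a random gradient and
    # record its norm. The spread across draws grows with depth.
    def grad_norm(depth, width, rng):
        x = rng.standard_normal(width)
        masks, weights = [], []
        for _ in range(depth):
            W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
            pre = W @ x
            weights.append(W)
            masks.append(pre > 0)          # ReLU derivative
            x = np.maximum(pre, 0.0)
        g = rng.standard_normal(width)     # gradient of a random scalar loss
        for W, mask in zip(reversed(weights), reversed(masks)):
            g = W.T @ (g * mask)           # backprop through one layer
        return np.linalg.norm(g)

    rng = np.random.default_rng(0)
    for depth in (5, 20, 80):
        norms = np.array([grad_norm(depth, 50, rng) for _ in range(20)])
        print(depth, norms.mean().round(3), norms.std().round(3))

Under this initialization, deeper and narrower nets show wildly varying gradient norms across draws, which is the empirical signature of the exploding/vanishing problem at initialization.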

Biography

Boris Hanin is a mathematician working on the theory of deep learning and on semi-classical problems in mathematical physics. He received his PhD in mathematics from Northwestern University before spending three years as an NSF Postdoctoral Fellow in Mathematics at MIT. He joined the faculty at Texas A&M in Fall 2017.

Faculty Contact: Dr. Atlas Wang


Bazinga: Lessons from Claude Shannon to Sheldon Cooper

Eduardo Nakamura
Visiting Associate Professor
Texas A&M University

4:10pm Wednesday, March 21, 2018
Room 124 HRBB

Abstract

Sheldon is a character in the sitcom The Big Bang Theory: a genius with very few social skills. In particular, Sheldon cannot easily tell when people are being sarcastic, because sarcasm depends on contextual social cues, such as body language, cultural behavior, and tone of voice, which Sheldon has no clue about. Sheldon's problem is the same problem we face in online social networks, where we have the written message but lack the other contextual social cues. When we have only a written message without any socio-behavioral context, sarcasm is unlikely to be perfectly detected, but we can still exploit all the clues within the text itself. In this research, we use ideas from Information Theory, proposed by Claude Shannon, including entropy, which accounts for the amount of information a message carries, and the Jensen-Shannon divergence, which in our case measures how different a message is from an average sarcastic message. Our results show this is a promising approach that is on par with current methods but computationally cheaper.
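
A minimal sketch of the two quantities (our illustration; the vocabulary and the reference "average sarcastic message" distribution below are hypothetical toys):

    import numpy as np
    from collections import Counter

    # Our illustration: score a message by its word entropy and by its
    # Jensen-Shannon divergence from a reference distribution estimated
    # over known sarcastic messages (corpus and vocabulary are toy data).
    def distribution(text, vocab):
        counts = Counter(text.lower().split())
        p = np.array([counts[w] for w in vocab], dtype=float) + 1e-12  # smoothing
        return p / p.sum()

    def entropy(p):
        return -np.sum(p * np.log2(p))

    def js_divergence(p, q):
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log2(a / b))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    vocab = ["oh", "great", "another", "monday", "love", "work"]
    sarcastic_ref = distribution("oh great another monday love love work", vocab)
    message = distribution("oh great another monday", vocab)
    print(entropy(message), js_divergence(message, sarcastic_ref))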

Biography

Eduardo Nakamura is a Visiting Associate Professor at Texas A&M and a Professor at the Federal University of Amazonas, Brazil. He holds a Ph.D. in Computer Science from the Federal University of Minas Gerais, Brazil (2007). His Ph.D. thesis was chosen as the best Brazilian Ph.D. thesis in Exact Sciences, awarded by the Brazilian Ministry of Education in 2009. He was also granted the IEEE Young Professional award by the IEEE Communications Society Latin America in 2009. In 2011, he was selected as an Affiliated Member of the Brazilian Academy of Sciences. His research interest is data-fusion algorithms for knowledge extraction. He has been applying data-fusion algorithms to wireless sensor networks since his Ph.D., and he is currently applying data fusion to social network analytics. Eduardo is a member of ACM and IEEE, and has served on technical committees of conferences sponsored by both associations.

Faculty Contact: Dr. Lawrence Rauchwerger


Transform Learning for Non-Local Image and Video Modeling

Bihan Wen
Ph.D. Candidate
University of Illinois at Urbana-Champaign

4:10pm Wednesday, March 28, 2018
Room 124 HRBB

Abstract

Techniques exploiting the sparsity of signals in a transform domain or dictionary have been popular in signal processing and computer vision. While synthesis-model-based methods, such as well-known dictionary learning, are widely used, the emerging sparsifying transform learning technique has received interest recently. It allows cheap and exact computations, and demonstrates promising performance in various applications including image and video processing, medical image reconstruction, and computer vision. In this talk, I will provide an overview of the transform learning problem. Several advanced data-driven transform models that we proposed will be discussed. Extending from local patch sparsity, I will show how non-local transform learning can be applied in image and video applications, and demonstrate state-of-the-art results.
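
The "cheap and exact" claim can be made concrete with a small sketch (ours, using assumed notation): in the transform model, the sparse code of a signal x under a transform W has a closed-form solution by hard thresholding, unlike synthesis sparse coding, which generally requires iterative pursuit.

    import numpy as np

    # Our sketch: transform-model sparse coding is exact and closed-form.
    # Given a (learned) transform W, the code z minimizing ||W x - z||_2^2
    # subject to ||z||_0 <= s is obtained by keeping the s largest-magnitude
    # entries of W x (hard thresholding).
    def transform_sparse_code(W, x, s):
        z = W @ x
        keep = np.argsort(np.abs(z))[-s:]
        z_sparse = np.zeros_like(z)
        z_sparse[keep] = z[keep]
        return z_sparse

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64))      # stand-in for a learned transform
    x = rng.standard_normal(64)
    z = transform_sparse_code(W, x, s=8)
    print(np.count_nonzero(z))             # 8: no iterative pursuit needed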

Biography

Bihan Wen received the B.Eng. degree in electrical and electronic engineering from Nanyang Technological University, Singapore, in 2012 and the M.S. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign, USA, in 2015. He is currently a final-year Ph.D. candidate working with Prof. Yoram Bresler at the University of Illinois at Urbana-Champaign. Bihan Wen received the Yee Fellowship Award in 2016 and the PEB Gold Medal in 2012. His work received a Top 10% Paper Award at ICIP 2014. He was included in the list of UIUC teachers ranked as excellent in 2013. His current research interests include machine learning, signal and image processing, low-rank/sparse representation, computer vision, and big data applications.

Faculty Contact: Dr. Zhangyang (Atlas) Wang


Cryptography for Cloud Security

Reihaneh Safavi-Naini
Visiting Professor, Texas A&M University
Professor, University of Calgary

4:10pm Monday, April 2, 2018
Room 124 HRBB

Abstract

Cloud computing has transformed the way we manage and interact with information and information-related services, while at the same time creating a plethora of security and privacy challenges for cloud users as well as cloud providers. Cloud security draws on a wide range of security technologies, including cryptography, access control, sandboxing, and software-defined security. The focus of this talk is the application of cryptographic algorithms and protocols for providing security guarantees for cloud services.

Traditional cryptographic algorithms and protocols can solve many of the security and privacy challenges that arise from the outsourcing of data and computation to the cloud. However, there are many other challenges that are unique to the cloud and require new cryptographic systems designed from scratch. In this talk we first give examples of cryptographic systems of both types, and then look at some of the emerging challenges due to the rise of IoT and data analytics.

Biography

Rei Safavi-Naini is the AITF Strategic Research Chair in Information Security and the Director of the Institute for Security, Privacy and Information Assurance at the University of Calgary, Canada.

She has published widely in premier venues in information security and privacy, and has given numerous keynote talks, most recently at IEEE MASCOTS 2017 and the joint session of ICITS (International Conference on Information Theoretic Security) and CANS (Cryptology And Network Security) 2017.

She has served on the editorial boards of major information security journals and is currently an Associate Editor of the IEEE Transactions on Information Theory, IET Information Security, and the Journal of Mathematical Cryptology. During the spring of 2018, she is a Visiting Professor in the Department of Computer Science and Engineering at Texas A&M, College Station.

Faculty Contact: Dr. Dilma Da Silva


Parallelism: To extract from old or to create a new?

Lawrence Rauchwerger
Eppright Professor
Texas A&M University

4:10pm Wednesday, April 4, 2018
Room 124 HRBB

Abstract

Parallel computers have come of age and need parallel software to justify their usefulness. There are two major avenues to get programs to run in parallel: parallelizing compilers and parallel languages and/or libraries. In this talk we present our latest results using both approaches and draw some conclusions about their relative effectiveness and potential.

In the first part we introduce the Hybrid Analysis (HA) compiler framework, which seamlessly integrates static and run-time analysis of memory references into a single framework capable of fully automatic loop-level parallelization. Experimental results on 26 benchmarks show full-program speedups superior to those obtained by the Intel Fortran compilers.
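
The flavor of such run-time checks can be sketched as an inspector/executor test (our illustration only; HA's actual memory-reference analysis is far more sophisticated):

    import numpy as np

    # Our illustration of a run-time disambiguation test, inspector/executor
    # style. For a loop of the form
    #     for i in range(n): a[w[i]] = f(a[r[i]])
    # a conservative check: distinct writes, and writes disjoint from reads,
    # imply the iterations are independent and may run in parallel.
    def is_parallel_safe(r, w):
        writes = set(w.tolist())
        return len(writes) == len(w) and writes.isdisjoint(r.tolist())

    r = np.array([0, 1, 2, 3])
    w = np.array([4, 5, 6, 7])
    print(is_parallel_safe(r, w))                        # True: independent
    print(is_parallel_safe(r, np.array([3, 5, 6, 7])))   # False: read-write overlap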

In the second part of this talk we present the Standard Template Adaptive Parallel Library (STAPL)-based approach to parallelizing code. STAPL is a collection of generic data structures and algorithms that provides a high-productivity parallel programming infrastructure analogous to the C++ Standard Template Library (STL). In this talk, we provide an overview of the major STAPL components, with particular emphasis on graph algorithms. We then present scalability results of real codes using petascale machines such as the IBM BG/Q and Cray systems. Finally, we present our ideas for future work in this area.

Biography

Lawrence Rauchwerger is the Eppright Professor of Computer Science and Engineering at Texas A&M University and the co-Director of the Parasol Lab. He received an Engineer degree from the Polytechnic Institute Bucharest, an M.S. in Electrical Engineering from Stanford University, and a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. He has held Visiting Faculty positions at the University of Illinois, Bell Labs, IBM T.J. Watson, and INRIA, Paris. Rauchwerger's approach to auto-parallelization, thread-level speculation, and parallel code development has influenced industrial products at corporations such as IBM, Intel, and Sun. Rauchwerger is an IEEE Fellow, an NSF CAREER award recipient, and has chaired various IEEE and ACM conferences, most recently serving as Program Chair of PACT 2016 and PPoPP 2017.

Faculty Contact: Dr. Dilma Da Silva


Software Engineering in Cloud and Mobile Era

Raj Singh
Adjunct Faculty
University of Houston

4:10pm Monday, April 9, 2018
Room 124 HRBB

Abstract

Cloud computing offers many benefits, such as agility, extensibility, maintainability, and cost efficiency in computing. However, there are many issues in building software services for the cloud, including software requirements, architecture and design, testing, quality, performance, scalability, security, and availability. Methodologies used in conventional software application development no longer provide the best fit for building applications hosted on cloud platforms. Software engineering needs to adapt to a continuous-delivery model to keep up with the dynamic cloud-computing model. We discuss some of the software engineering challenges and opportunities of the cloud-computing era.

Biography

Raj Singh is an adjunct faculty member at the University of Houston, teaching software design and software engineering courses. He received a BE degree in Electronics Engineering and an MBA from Nagpur University, an MS in Computer Science from McNeese State University, and a Ph.D. in Computer Science from the Center for Advanced Computer Studies (CACS), University of Louisiana at Lafayette. His academic experience includes a lecturer position at the University of Mumbai and instructor positions at McNeese State University and the University of Louisiana at Lafayette. Professionally, he has over 18 years of software industry experience, with notable success directing a broad range of IT solutions. His research interests are in the areas of software engineering, software development methodologies, and biomedical data mining.

Faculty Contact: Dr. Aakash Tyagi


Advancing Health and Wellbeing Using Ubiquitous Computing Technologies

Mi Zhang
Michigan State University

4:10pm Wednesday, April 11, 2018
Room 124 HRBB

Abstract

Health is identified by the National Academy of Engineering as one of the Grand Challenges for Engineering in the 21st century. As an emerging area targeting this challenge, the development of ubiquitous computing technologies for healthcare and wellbeing applications has tremendous potential to transform the existing healthcare system to deliver personalized healthcare seamlessly in our everyday lives. In this talk, I will demonstrate how we unlock the potential of ubiquitous computing technologies to realize the vision of personalized healthcare. First, I will present our work on developing a deep learning-based mobile pill image recognition technology that won the NIH Pill Image Recognition Challenge. Second, I will present a collection of intelligent sensing technologies that monitor and analyze health signals as well as indoor air quality, which is highly relevant to human health. Third, I will present our initial efforts on utilizing the power of big data and building smart service systems to tackle mental health, one of the most challenging health problems in our society.

Biography

Mi Zhang is an Assistant Professor in the Departments of Electrical and Computer Engineering, Computer Science and Engineering, and Biomedical Engineering at Michigan State University. He received his Ph.D. from the University of Southern California and his B.S. from Peking University. Before joining MSU, he was a postdoctoral associate at Cornell University. At MSU, he directs the Intelligent Sensing and Mobile Systems group, developing innovative technologies at the frontier of wearable and mobile sensing systems, embedded deep learning systems, Internet of Things, and big data analytics, with a special focus on healthcare applications. Much of his work has been reported and highlighted by NSF and NIH, as well as by leading national and international media such as TIME, MIT Technology Review, New Scientist, CNN, ABC, Discovery, Smithsonian, The Wall Street Journal, The Huffington Post, and The Washington Post. Mi Zhang is the First Prize Winner of the 2016 NIH Pill Image Recognition Challenge, the Third Place Winner of the 2017 NSF Hearables Challenge, the recipient of the 2016 NSF CRII Junior Faculty Award and a Best Paper Award Honorable Mention at the 2015 ACM UbiComp conference, and was selected by NIH as an NIH Mobile Health (mHealth) Scholar in 2017.

Faculty Contact: Dr. Roozbeh Jafari


Massively Parallel 3D Model Based Image Reconstruction

Sam Midkiff
Professor of Electrical and Computer Engineering
Purdue University

4:10pm Monday, April 16, 2018
Room 124 HRBB

Abstract

Model-based image reconstruction delivers better images with less data, allowing faster throughput on scanning equipment, lower radiation doses, or fewer photons. The technique is widely applicable to areas such as medical imaging, electron microscopy, line-beam data, and security scanning. However, it suffers from extremely high computational costs, taking 1,000 times longer or more to perform an image reconstruction than more commonly used direct methods like forward back projection. Our techniques have given us speedups of 1600+ over single-node reconstructions using the previous state-of-the-art implementation on the Lawrence Berkeley Labs NERSC machine. This application is representative of many programs in active use in that large increases in performance and efficiency are possible when an implementation is viewed with a fresh eye, and it is representative of a class of programs in which standard libraries and compiler techniques are powerless to help.
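
The computational gap can be seen in miniature on a toy least-squares problem (our sketch, not the talk's system): a direct method applies one adjoint pass, while model-based reconstruction iterates forward and back projections inside an optimization loop, paying many passes for a better image.

    import numpy as np

    # Toy comparison (ours): direct reconstruction = one adjoint pass;
    # model-based reconstruction = many forward/back passes minimizing
    # ||A x - y||^2 + lam * ||x||^2 by gradient descent.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 100))    # stand-in for the scanner model
    x_true = rng.standard_normal(100)
    y = A @ x_true + 0.1 * rng.standard_normal(200)

    x_direct = A.T @ y                     # cheap but crude

    x, lam, step = np.zeros(100), 1e-2, 1e-3
    for _ in range(500):                   # costly but far more accurate
        x -= step * (A.T @ (A @ x - y) + lam * x)

    def direction_error(xhat):
        return np.linalg.norm(xhat / np.linalg.norm(xhat)
                              - x_true / np.linalg.norm(x_true))

    print(direction_error(x_direct), direction_error(x))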

This work was funded by HPI, LLC and the Department of Homeland Security.

Biography

Samuel Midkiff is a Professor in the School of Electrical and Computer Engineering, Purdue University. He received his PhD in Computer Science from the University of Illinois in 1992. He was at the IBM T. J. Watson Research Center until the Fall of 2002, when he joined the faculty of Purdue. He is a co-founder of HPI, LLC. His research interests are broadly in the area of compiler optimization and high performance computing. His current research is in high performance image reconstruction, the analysis of event driven programs, and programming APIs for multi-node GPU systems.

Faculty Contact: Dr. Lawrence Rauchwerger


Towards Content-Based Essay Scoring

Vincent Ng
Professor
University of Texas at Dallas

4:10pm Wednesday, April 18, 2018
Room 124 HRBB

Abstract

State-of-the-art automated essay scoring engines such as E-rater do not grade essay content, focusing instead on providing diagnostic trait feedback on categories such as grammar, usage, mechanics, style, and organization. Content-based essay scoring is very challenging: it requires an understanding of essay content and is beyond the reach of today's automated essay scoring technologies. As a result, content-dependent dimensions of essay quality are largely ignored in existing automated essay scoring research. In this talk, we describe our recent and ongoing efforts on content-based essay scoring, sharing the lessons we learned from automatically scoring two of arguably the most important content-dependent dimensions of persuasive essay quality: thesis clarity and argument persuasiveness.

Biography

Vincent Ng is a Professor in the Computer Science Department at the University of Texas at Dallas. He is also the director of the Machine Learning and Language Processing Laboratory in the Human Language Technology Research Institute at UT Dallas. He obtained his B.S. from Carnegie Mellon University and his Ph.D. from Cornell University. His research is in the area of Natural Language Processing, focusing on the development of machine learning methods for addressing key tasks in information extraction and discourse processing.

Faculty Contact: Dr. Ruihong Huang


Automatic Hierarchical Parallelization of Linear Recurrences

Martin Burtscher
Professor
Texas State University

4:10pm Monday, April 23, 2018
Room 124 HRBB

Abstract

Many important computations from various fields are instances of linear recurrences. Prominent examples include prefix sums in parallel processing and recursive filters in digital signal processing. In recurrences, later result values depend on earlier result values, making them challenging to compute in parallel. We present a brand-new work-, space-, and communication-efficient algorithm to compute linear recurrences that is based on Fibonacci numbers, amenable to automatic parallelization, and suitable for GPUs. We implemented our approach in a small compiler that translates recurrences expressed in signature notation into CUDA code. Moreover, we discuss the domain-specific optimizations performed by our compiler to produce state-of-the-art implementations of linear recurrences. Compared to the fastest prior GPU codes, all of which only support certain types of recurrences, our automatically parallelized code performs on par or better in most cases. In fact, for standard prefix sums and single-stage IIR filters, it reaches the throughput of memory copy for large inputs, which cannot be surpassed. On higher-order prefix sums, it performs nearly as well as the fastest handwritten code. On tuple-based prefix sums and 1D recursive filters, it outperforms the fastest preexisting implementations.
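
Why such recurrences parallelize at all can be shown in a few lines (our illustration; the speaker's Fibonacci-based algorithm is different and more efficient): a first-order recurrence y[i] = a[i]*y[i-1] + b[i] is a composition of affine maps, and composition is associative, so any parallel scan (e.g., a balanced tree) applies.

    import numpy as np

    # Our illustration: the recurrence y[i] = a[i]*y[i-1] + b[i] corresponds
    # to composing affine maps (a, b); composition is associative, so a
    # parallel scan can evaluate it. The loop below is the sequential form
    # of that scan.
    def compose(f, g):
        # apply f = (a1, b1) first, then g = (a2, b2)
        a1, b1 = f
        a2, b2 = g
        return (a2 * a1, a2 * b1 + b2)

    def recurrence_scan(a, b):
        out, acc = [], (1.0, 0.0)          # identity affine map
        for p in zip(a, b):
            acc = compose(acc, p)
            out.append(acc[1])             # y[i], assuming y[-1] = 0
        return np.array(out)

    a = np.array([0.5, 2.0, 1.0, 0.25])
    b = np.ones(4)
    print(recurrence_scan(a, b))           # [1. 3. 4. 2.]

    # Matches the direct sequential loop:
    y, ys = 0.0, []
    for ai, bi in zip(a, b):
        y = ai * y + bi
        ys.append(y)
    print(np.array(ys))                    # [1. 3. 4. 2.]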

Biography

Martin Burtscher is a Professor in the Department of Computer Science at Texas State University. He received the BS/MS degree from ETH Zurich and the PhD degree from the University of Colorado at Boulder. Martin's current research focuses on parallelization of complex programs for GPUs as well as on automatic synthesis of data-compression algorithms. He has co-authored over 100 peer-reviewed scientific publications. Martin is a distinguished member of the ACM and a senior member of the IEEE.

Faculty Contact: Dr. Daniel Jiménez


Geospatial Data Science Techniques for Earth Science Applications

Zhe Jiang
Assistant Professor
University of Alabama

4:10pm Wednesday, April 25, 2018
Room 124 HRBB

Abstract

Geospatial data science is an interdisciplinary field that studies effective and efficient algorithms to identify patterns or make predictions on large spatial data. With the advancement of GPS and remote sensing technology, large amounts of geospatial data are being collected at an increasing speed. Examples include earth observation imagery, geo-social media, GPS trajectories, and temporally detailed road networks. Analyzing such rich data assets is already transforming our society in applications such as national water forecasting, disaster response, and crime prevention. However, it also poses unique data science challenges. First, nearby sample locations tend to resemble each other instead of being statistically independent (a phenomenon called spatial autocorrelation). Thus, traditional data science methods (e.g., decision trees, random forests) may not perform well on spatial data, producing artifacts such as salt-and-pepper noise. Second, the spatial dependency across locations is often non-uniform across different directions (anisotropic), and thus cannot simply be represented as a function of distance. Third, there is often limited ground truth data due to the high costs of sending a field crew on the ground. Finally, the large data volume (e.g., terabytes of high-resolution imagery for a single city) requires algorithms to be scalable. In this talk, I will introduce our ongoing research that addresses some of the above challenges. Specifically, I will introduce a novel spatial classification model called the geographical hidden Markov tree, which models anisotropic spatial dependency in a reverse tree structure within a hidden class layer. I will discuss efficient algorithms for model construction, parameter learning, and class inference. Preliminary results on real-world high-resolution earth imagery for flood mapping in Hurricane Matthew and Hurricane Harvey show that our method outperforms several existing methods. I will also discuss several future research directions.
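
The salt-and-pepper failure mode of location-independent classifiers is easy to reproduce (our illustration of the problem only, not the geographical hidden Markov tree; the median filter below is just a crude stand-in for proper spatial-dependency modeling):

    import numpy as np
    from scipy.ndimage import median_filter

    # Our illustration: per-pixel (i.i.d.) classification of a noisy
    # two-region image yields scattered, salt-and-pepper errors; even a
    # crude neighborhood rule (a 3x3 median filter) removes most of them,
    # hinting at the value of modeling spatial dependency.
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64), dtype=int)
    truth[:, 32:] = 1                      # left = dry (0), right = flood (1)
    obs = truth + 0.6 * rng.standard_normal(truth.shape)

    per_pixel = (obs > 0.5).astype(int)    # each pixel decided independently
    smoothed = median_filter(per_pixel, size=3)
    print((per_pixel != truth).mean(), (smoothed != truth).mean())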

Biography

Dr. Zhe Jiang is currently an assistant professor in the Department of Computer Science at the University of Alabama. He received his Ph.D. in computer science from the University of Minnesota in 2016 and his B.E. degree from the University of Science and Technology of China in 2010. His research interests include spatial big data analytics; spatial and spatiotemporal data mining; spatial databases; geographic information systems; and their interdisciplinary applications in earth science, transportation, public safety, public health, etc. He has served as a reviewer for reputed conferences and journals such as ACM SIGKDD, AAAI, and IEEE TKDE, as well as a panelist and ad hoc reviewer for NSF. For more information, see Dr. Jiang's research page.

Faculty Contact: Dr. Zhangyang (Atlas) Wang


Simulating Natural Environments

Jerry Tessendorf

Professor of Visual Computing
School of Computing
Clemson University

Faculty Fellow of the Hagler Institute for Advanced Study
Texas A&M University

4:10 p.m. Monday, April 30, 2018
Room 124 HRBB

Abstract

Assembling and using computer-graphic natural environments relies on a very broad collection of ideas and techniques that are integrated into a common system. This "multi-physics" approach uses modular software design, procedural conceptual frameworks, and iterative workflows to provide more than one procedural solution path. The need to improve quality and engage new scenarios drives innovations in individual modules. In this talk we examine the broad scope of the construction of natural environments, module details, and improvements over time in response to specific needs in entertainment, engineering, and education.

Biography

Jerry Tessendorf is a Professor of Visual Computing at Clemson University. In 2018, he is a Faculty Fellow of the Hagler Institute for Advanced Study, an Eminent Scholar, and a Visiting Professor in the Visualization Department at Texas A&M University. His research is in fluid dynamics, radiative transfer, volumetric modeling, and production workflow for feature films, games, and engineering. As a Senior Research Scientist and Principal Graphics Scientist, he developed new movie production techniques and software for 15 years at Rhythm & Hues and Cinesite Digital Studios, and received an Academy Award for Technical Achievement in 2008. Prior to that he was Corporate Senior Scientist at Arete Associates. He has a Ph.D. in physics from Brown University.

Faculty Contact: Dr. Nancy Amato


Signal Processing Methods for Accent Conversion

Ricardo Gutierrez-Osuna
Professor
Texas A&M University

4:10pm Wednesday, May 2, 2018
Room 124 HRBB

Abstract

Despite years or decades of immersion in a new culture, older second-language (L2) learners typically speak with a so-called "foreign accent," sometimes despite concerted efforts at improving pronunciation. A number of studies have suggested that it would be beneficial for such learners to be able to listen to their own voices producing native-accented speech. As a step towards this goal, we are developing speech processing methods to modify the perceived accent of utterances from L2 speakers of English. Our approach consists of decomposing the speech signal of a learner and a teacher into two components: one that carries the speakers' voice quality and a second one that contains their linguistic gestures. By combining the voice quality of the learner with the linguistic gestures of the teacher, we can then generate a "morphed" voice that is perceived to be like that of the L2 speaker but has the accent of the native speaker. In this talk, I will introduce our signal processing methods for accent conversion, including probabilistic and sparse methods in the acoustic domain and articulatory synthesis techniques in the articulatory domain.
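
A crude analogue of the decompose-and-recombine idea can be sketched with an off-the-shelf vocoder (our simplification, assuming the pyworld and soundfile packages and hypothetical parallel recordings learner.wav and teacher.wav; the talk's probabilistic, sparse, and articulatory methods go far beyond this):

    import numpy as np
    import soundfile as sf
    import pyworld as pw

    # Decompose two (assumed mono) recordings of the same sentence into
    # pitch (f0), spectral envelope (sp), and aperiodicity (ap).
    learner, fs = sf.read("learner.wav")   # hypothetical file names
    teacher, _ = sf.read("teacher.wav")
    f0_l, sp_l, ap_l = pw.wav2world(learner.astype(np.float64), fs)
    f0_t, sp_t, ap_t = pw.wav2world(teacher.astype(np.float64), fs)

    # Recombine: keep the learner's spectral envelope (a proxy for voice
    # quality) but impose the teacher's pitch contour (one crude aspect of
    # the linguistic gesture). Truncation is a naive stand-in for the frame
    # alignment a real system would perform.
    n = min(len(f0_l), len(f0_t))
    morphed = pw.synthesize(f0_t[:n], sp_l[:n], ap_l[:n], fs)
    sf.write("morphed.wav", morphed, fs)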

Biography

Ricardo Gutierrez-Osuna received a B.S. degree in Electrical Engineering from the Polytechnic University of Madrid (Spain) in 1992, and M.S. and Ph.D. degrees in Computer Engineering from North Carolina State University in 1995 and 1998, respectively. He is currently a Professor in the Department of Computer Science and Engineering at Texas A&M University. He has broad research interests in speech processing, machine learning, and models of human perception.

Faculty Contact: Dr. Lawrence Rauchwerger