Invited Seminars are special speaking engagements occasionally associated with a department event held later the same day.

Summer 2018


Computer Science and Engineering Special Seminar

Coding and Memory Systems


Prof. A. J. Han Vinck
Full Professor
Digital Communications
University of Duisburg-Essen, Germany

Friday, July 6, 2018
10 a.m., Room 302 H.R. Bright Building


In this talk, we discuss the use of different types of memory systems. Error-correcting codes improve memory lifetime, and special coding techniques are required to optimize storage capacity. We consider this problem from an information-theoretic point of view, starting with memories with defects and proceeding via write-once memories and flash to phase-change random access memory.
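The write-once memories mentioned in the abstract can be illustrated with the classic Rivest-Shamir WOM code, a standard construction from the coding literature (included here for context; it is not taken from the talk). It stores two successive 2-bit values in three write-once cells, whose bits can only change from 0 to 1:

```python
# Rivest-Shamir write-once memory (WOM) code: two successive 2-bit writes
# into three write-once cells (bits may only change 0 -> 1).
# Standard textbook construction, shown here as an illustrative sketch.

FIRST = {0b00: (0, 0, 0), 0b01: (1, 0, 0), 0b10: (0, 1, 0), 0b11: (0, 0, 1)}

def write1(value):
    """First write: store a 2-bit value using at most one set cell."""
    return FIRST[value]

def write2(cells, value):
    """Second write: reuse the same cells, flipping bits 0 -> 1 only."""
    if decode(cells) == value:
        return cells                                  # already stored
    target = tuple(1 - b for b in FIRST[value])       # complement pattern
    assert all(c <= t for c, t in zip(cells, target)), "0->1 only"
    return target

def decode(cells):
    """Weight <= 1 means a first-write codeword; weight >= 2 a second-write one."""
    table = FIRST if sum(cells) <= 1 else \
        {v: tuple(1 - b for b in c) for v, c in FIRST.items()}
    return {c: v for v, c in table.items()}[cells]

cells = write1(0b01)          # (1, 0, 0), decodes to 0b01
cells = write2(cells, 0b10)   # (1, 0, 1), decodes to 0b10
```

Four information bits are written over the lifetime of three cells, beating the naive one-bit-per-cell limit; this is the kind of storage-capacity optimization the abstract alludes to.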


Prof. A. J. Han Vinck has been a full professor in Digital Communications at the University of Duisburg-Essen, Germany, since 1990, specializing in information and communication theory, and coding and network aspects in digital communications.

He has held various positions of professional responsibility, including Director of the Institute for Experimental Mathematics in Essen, founding Chairman of the IEEE German Information Theory Chapter, President of the IEEE Information Theory Society (2003), and President of the Shannon, Leibniz and Gauß foundations for the stimulation of research in the field of information theory and digital communications.

He has received a number of accolades, including election as an IEEE Fellow for his “Contributions to Coding Techniques”; appointment as a Distinguished Lecturer for both the IEEE Information Theory Society and the IEEE Communications Society; the IEEE ISPLC 2006 Achievement Award for contributions to power line communications; the SAIEE annual award for the best paper published in the SAIEE Africa Research Journal in 2008; and the 2015 Aaron D. Wyner Distinguished Service Award for longstanding contributions to the IEEE Information Theory Society.

Prof. Han Vinck has been instrumental in the organization of research forums, including the IEEE Information Theory workshops and symposia, the Japan-Benelux workshops on information theory (now the Asia-Europe workshop on “Concepts in Information Theory”), and the International Winter School on Coding, Cryptography and Information Theory in Europe. He is the author of the book “Coding Concepts and Reed-Solomon Codes.”

Faculty Contact: Dr. Anxiao (Andrew) Jiang

Spring 2016


CSE Industrial Affiliates Program
Distinguished Lecturer Presentation

Synthesis Beyond Automated Programming

Dr. Armando Solar-Lezama
Associate Professor
Massachusetts Institute of Technology

Thursday, March 3, 2016
4:10 p.m., Room 124 H.R. Bright Building



In this talk, I will describe recent advances in our ability to synthesize programs that satisfy a specification, and some of the new applications that have been enabled by these advances. The first part of the talk will focus on the Sketch synthesis system and the algorithms that allow it to scale to challenging synthesis problems. The second part of the talk will focus on applications that go beyond the traditional setup where a programmer writes a specification and the system automatically derives a program. Instead, I will talk about how synthesis can be leveraged as part of larger systems for program optimization, automated tutoring, and even to help develop the synthesizers themselves.
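The core idea behind sketch-based synthesis, completing a program with "holes" so that it satisfies a specification, can be conveyed with a toy enumeration (an illustrative sketch only; the actual Sketch system uses counterexample-guided search over constraint encodings, not brute force):

```python
# Toy illustration of sketch-based synthesis: a program template with a
# "hole" (an unknown constant) is completed by searching for a hole value
# that makes the sketch agree with the specification on all test inputs.
# Names and the search strategy here are invented for illustration.

def synthesize(sketch, spec, inputs, hole_values):
    """Return a hole value h such that sketch(x, h) == spec(x) for all inputs."""
    for h in hole_values:
        if all(sketch(x, h) == spec(x) for x in inputs):
            return h
    return None  # no completion exists in the candidate set

# Sketch: "x shifted left by ?? equals x * 8" -- synthesize the shift amount.
sketch = lambda x, h: x << h
spec   = lambda x: x * 8
hole = synthesize(sketch, spec, inputs=range(16), hole_values=range(32))
# hole == 3, since x << 3 == x * 8 for all test inputs
```

The interesting algorithmic work in real synthesizers lies in searching enormous hole spaces efficiently, which is what the scaling techniques mentioned in the abstract address.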


Armando Solar-Lezama is an Associate Professor at MIT where he leads the Computer Aided Programming Group. Before that, he was a graduate student at UC Berkeley, and an undergraduate at Texas A&M. His work focuses on developing new technologies for program synthesis as well as developing new applications of this technology. His work has been published at major venues in a variety of fields including programming languages (PLDI, POPL), formal methods (CAV), computer systems (SOSP), software engineering (ICSE, FSE), databases (SIGMOD), high-performance computing (SC) and machine learning (NIPS).

Faculty Contact: Dilma Da Silva

Fall 2015


Computer Science and Engineering Special Seminar:

Learning Effective Representation with Electronic Health Records

Fei Wang
Associate Professor
Department of Computer Science and Engineering
University of Connecticut

Thursday, October 22, 2015
2 p.m., Room 302 H.R. Bright Building


Data-Driven Healthcare (DDH) has attracted considerable interest from various research fields in recent years. Patient Electronic Health Records (EHRs) are one of the major carriers for conducting DDH research. There are many challenges in working directly with EHR data, such as sparsity, high dimensionality, and temporality.

In this talk I will introduce my recent work on learning effective representations for EHR data, including: 1) a grouping scheme to obtain higher-level EHR representations and 2) temporal pattern extraction to explore the event temporality of EHR data. We will show various applications of these techniques, including early prediction of the onset risk of chronic diseases and disease progression modeling.


Fei Wang is currently an associate professor in the Department of Computer Science and Engineering at the University of Connecticut (UConn). He is also affiliated with the UConn School of Medicine and the UConn Center for Health, Intervention and Prevention (CHIP).

Before his current position, he worked at the IBM T.J. Watson Research Center for 4.5 years. His major research interest is data analytics and its applications in biomedical informatics. He has published more than 150 papers at top data mining conferences such as KDD, ICDM and SDM, as well as medical informatics conferences such as AMIA.

His papers have received over 2,700 citations. He received a best research paper nomination at ICDM 2010, the best student paper award at ICDM 2015, and a nomination for the Marco Romani Best Research Paper Award at AMIA TBI 2014, and his papers were best paper finalists at SDM 2011 and 2015.

Faculty Contact: Dr. Ben Hu

Spring 2015


Computer Science and Engineering Special Seminar:

Discussing the Development of the Next-Generation, Interoperable, Federated Cyberinfrastructure

Victor G. Hazlewood
Chief Operating Officer
Joint Institute for Computational Sciences
University of Tennessee

Tuesday, April 7, 2015
2 p.m., Room 302 H.R. Bright Building


Interoperation and federation have become important components of building cyberinfrastructure (CI). The process of evolving the nation’s CI is already underway in the form of shared and federated infrastructure operated by formal and ad hoc collaborations among researchers and infrastructure owners. Multiple federated systems have grown up via parallel evolution in the GENI, cloud, and grid communities, for example. These collaborations are in large part operating without the benefit of consensus approaches for integration and federation. As distributed computing, high-performance computing, data-intensive computing, and advanced networking are brought together to solve next-generation problems, it is important to identify cross-community common supporting elements and technologies for future federation and interoperability of the CI. To discuss this topic, NSF sponsored a workshop in 2014 that brought together a diverse group of networking and CI community leaders to identify the challenging problems in the federation and interoperability of existing and future cyberinfrastructures. This talk will discuss the current state of the national cyberinfrastructure as described by the workshop participants, present the results of the NSF-sponsored workshop, and provide attendees an opportunity to discuss and supplement the workshop results with their input.


Victor Hazlewood is the Chief Operating Officer of the Joint Institute for Computational Sciences (JICS) at the University of Tennessee (UT), with 20 years of experience in high-performance computing (HPC) in the research community. Victor is currently the Deputy Director of Operations for XSEDE, the US national cyberinfrastructure project, and PI or co-PI on three NSF awards in networking and cyberinfrastructure. He has worked in a variety of areas of IT and HPC, including systems, operations, security, and grid middleware and infrastructure, and on a variety of HPC systems, including the Cray Y-MP system at the Texas A&M Supercomputer Center, the first academic TeraFLOP system (Blue Horizon) at SDSC, and the first academic PetaFLOP system at UT. He has worked on a variety of cyberinfrastructure projects, including NPACI, TeraGrid and XSEDE. Victor has a B.S. in Computer Science from Texas A&M University and is a Certified Information Systems Security Professional.

Faculty Contact: Dr. Daniel Ragsdale

Fall 2014


Computer Science and Engineering Special Seminar:

Signaling Hypergraphs

Dr. T.M. Murali
Department of Computer Science
Virginia Tech

Monday, December 8, 2014
9:30 a.m., Room 302 H.R. Bright Building




Cells communicate with each other to perform their functions within the body. When a cell receives an external signal from the environment, it responds with a series of molecular reactions that alters the cell's behavior, e.g., causing it to divide, move, or self-destruct.  These reactions constitute networks called "signaling pathways." Directed graphs are the most common representation of signaling pathways, making them amenable to a wide array of graph-theoretic algorithms. 

In the first part of this talk, I present PathLinker, a graph-based method that can reconstruct the interactions in a signaling pathway from a background protein interaction network given only the identities of the receptors (sources) and transcriptional regulators (targets) in that pathway. PathLinker successfully reconstructs a comprehensive array of human signaling pathways with much higher precision and recall than several state-of-the-art algorithms. We developed PathLinker in an effort to complement and streamline the tedious manual curation of the literature that has been necessary to maintain high-quality databases of signaling pathways.
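As a rough illustration of the reconstruction task PathLinker addresses, connecting receptors to transcriptional regulators through a weighted interaction network, here is a minimal shortest-path sketch using a super-source over the receptors (a simplification for illustration only: PathLinker itself ranks k shortest paths over weights derived from interaction confidences):

```python
# Toy sketch of signaling-pathway reconstruction as a shortest-path
# problem: attach a zero-cost super-source "S" to all receptors and run
# Dijkstra's algorithm until any transcriptional regulator is reached.
# Node names and edge weights below are invented example data.
import heapq

def shortest_path(edges, sources, targets):
    graph = {}
    for u, v, w in edges:
        graph.setdefault(u, []).append((v, w))
    graph["S"] = [(s, 0.0) for s in sources]   # super-source over receptors
    dist, prev = {"S": 0.0}, {}
    heap = [(0.0, "S")]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                            # stale queue entry
        if u in targets:                        # first target popped is closest
            path = [u]
            while path[-1] != "S":
                path.append(prev[path[-1]])
            return list(reversed(path))[1:]     # drop the super-source
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return None

edges = [("R1", "A", 1.0), ("A", "T1", 1.0), ("R2", "B", 0.5), ("B", "T1", 3.0)]
print(shortest_path(edges, sources={"R1", "R2"}, targets={"T1"}))
# -> ['R1', 'A', 'T1']  (total weight 2.0 beats R2 -> B -> T1 at 3.5)
```

Ranking many such paths, rather than taking just the single best one, is what lets a method like PathLinker recover a whole pathway rather than one chain of interactions.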

In the second part of the talk, I discuss an alternative mathematical representation called the "signaling hypergraph."  I illustrate how signaling hypergraphs overcome limitations of graph-based representations. I present a mixed integer linear program to solve the NP-hard shortest hyperpath problem. I apply the algorithm to a well-known signaling pathway, and describe how the shortest hyperpaths better represent signaling reactions than the corresponding shortest paths in graphs.  I conclude with a summary of our ongoing research. Signaling hypergraphs exemplify how careful attention to the underlying biology can drive developments in a largely unexplored field of computer science.


T. M. Murali is an associate professor in the Department of Computer Science at Virginia Tech. He co-directs the ICTAS Center for Systems Biology of Engineered Tissues and is the associate director for the Computational Tissue Engineering interdisciplinary graduate education program. Murali's research group develops phenomenological and predictive models of the function, behavior, and properties of large-scale molecular interaction networks in the cell. He received his undergraduate degree in computer science from the Indian Institute of Technology, Madras, and his Sc.M. and Ph.D. degrees from Brown University. Murali is an ACM Distinguished Scientist. His group won the best paper award at the 2012 ACM Conference on Bioinformatics, Computational Biology, and Biomedicine.

Question & Answer Session in HRBB 302, immediately following Dr. Murali's presentation, from 10:30 to 11 a.m.

Faculty Contact: Dr. John Keyser

Spring 2014


CSE Industrial Affiliates Program
Distinguished Lecturer Presentation

HPC Challenges in the Oil & Gas Industry

Dr. Carlos Boneti
Senior HPC Software Engineer
Schlumberger’s Houston Technology Center

Friday, April 25, 2014
10:45 a.m., Room 301 Rudder Tower



Schlumberger is the world’s leading supplier of technology, integrated project management and information solutions to customers working in the oil and gas industry worldwide. Schlumberger provides the industry’s widest range of products and services from exploration through production.

Computer science challenges are present in almost every product or service developed by Schlumberger. In particular, performance is now a limiting factor for many of our technologies. HPC is pervasive.

In this talk, we present a few of the HPC challenges we face at Schlumberger. In addition, we focus on the problems of handling large non-uniform memory, multi-threading, and vectorization, explaining why these problems are now more important than ever.

Furthermore, we argue that education is a key enabler of performance, as the knowledge of performance factors, parallelization and basic computer architecture becomes of increasing importance for every product we develop.


Carlos Boneti is a senior HPC software engineer at Schlumberger’s Houston Technology Center (HTC), where he focuses on the performance of different SLB projects. He holds a Ph.D. and a DEA in computer architecture and a B.Sc. in computer science. Before joining Schlumberger, Carlos collaborated with the Barcelona Supercomputing Center, where most of his research targeted the interactions between multithreaded architectures and the OS for high-performance computing systems. Currently, Carlos is especially interested in applying emerging HPC technologies to solve problems in the oil and gas industry.

Faculty Contact: Nancy M. Amato

Summer 2013


Computer Science and Engineering Special Presentation:

Enabling Human-Centered Computing for Active, Purposeful, and Intensive Information Processing

Dr. Luis Francisco-Revilla
Assistant Professor
School of Information
The University of Texas

Tuesday, July 16, 2013
10:30 a.m., Room 302 H.R. Bright Building


In an age inundated with a massive and growing scale of information, the ability to collect and analyze big data is revolutionizing the work of information professionals. However, while Moore's Law and algorithmic advances continue to improve computational capabilities, many critical information analysis tasks depend on having humans-in-the-loop, with human-machine interactions driving subsequent computations. Broadly speaking, people are engaging with information and information systems more actively, purposefully, and intensively than ever before. However, Amdahl's law suggests that overall system performance is increasingly bounded by the limitations of human-machine interfaces rather than by computational capabilities. Human-centered computing seeks to address this fundamental bottleneck by partnering human insight and computational power. In this talk, I present research on new ways to empower people to review, analyze, and draw conclusions from data in information-centered domains. A broad goal of my research is to augment the ability of people to discover, organize, understand, and use knowledge from collections and datasets. Toward this end, I describe the design, implementation, and evaluation of several human-centered systems I have built. These systems empower users to perform complex tasks in the information-rich domains of computational journalism, assistive technology design, and digital libraries and archives.


For more than 10 years, Dr. Luis Francisco-Revilla has been conducting research in Human-Centered Computing to augment how people use and process information. His research is interdisciplinary and multifaceted, bridging social, informational, and computational aspects. His work integrates and contributes to human-computer interaction, digital libraries, hypertext, computer-supported cooperative work, information retrieval, and recommender systems. Dr. Francisco-Revilla is Co-PI on three international grants from Fundação para a Ciência e a Tecnologia (Portugal’s equivalent to the NSF). He is also Co-PI on an IMLS project training tomorrow's faculty in digital libraries. Dr. Francisco-Revilla has published extensively in ACM conferences, especially the ACM/IEEE Joint Conference on Digital Libraries (JCDL), where he served as Program Chair in 2009, as well as the ACM Hypertext Conference. His academic experience includes being a faculty member at the School of Information at the University of Texas at Austin, as well as a lecturer in Europe (University of Porto, Portugal) and Latin America (Universidad Iberoamericana, Mexico). He has extensive teaching experience, having taught courses in computer science, information science, and electronic engineering to traditional and non-traditional students.

Staff Contact: Mrs. Kathy Waskom


College of Engineering Special Presentation:

Technology for Health: Current Work and Future Possibilities

Dr. Marjorie Skubic
Professor of Electrical and Computer Engineering
University of Missouri

Wednesday, July 10, 2013
2:30 p.m., Room 2005 Emerging Technologies Building

Reception immediately following in ETB atrium


Dr. Skubic will describe ongoing interdisciplinary research investigating the use of in-home sensor technology and machine learning to detect early signs of illness and functional decline, as a strategy towards proactively managing chronic health conditions. The sensor network includes passive infrared motion sensors, a new hydraulic bed sensor that captures quantitative pulse, respiration, and restlessness while positioned under the mattress, and gait analysis using vision, radar, and the Microsoft Kinect depth camera.

The network is being tested in TigerPlace, an aging-in-place facility in Columbia, MO, designed to help residents manage illness and impairments and stay as healthy and independent as possible. About 50 sensor networks have been installed in TigerPlace since fall 2005, with an average installation duration of over two years. More recently, sensor networks have been installed in senior housing in Cedar Falls, Iowa. Automated health alerts are generated at both sites and sent to the clinical staff, based on recognized changes in the sensor data patterns. Gait analysis systems have been installed in over 25 senior apartments and continuously capture gait through passive observation of residents as they move about the home in their normal activities.

The talk will focus on challenges in signal processing and machine learning for two sensing systems: (1) the hydraulic bed sensor system developed at the University of Missouri and (2) the Microsoft Kinect as used for capturing gait parameters from depth images and tracking fall risk. In addition, Dr. Skubic will discuss possibilities for new directions in technology for managing health.


Marjorie Skubic received her Ph.D. in Computer Science from Texas A&M University, where she specialized in distributed telerobotics and robot programming by demonstration. She is currently a Professor in the Electrical and Computer Engineering Department at the University of Missouri, with a joint appointment in Computer Science. In addition to her academic experience, she has spent 14 years working in industry on real-time applications such as data acquisition and automation. Her current research interests include sensory perception, computational intelligence, spatial referencing interfaces, human-robot interaction, and sensor networks for eldercare.

In 2006, Dr. Skubic established the Center for Eldercare and Rehabilitation Technology at the University of Missouri and serves as the Center Director for this interdisciplinary team. The focus of the center’s work includes monitoring systems for tracking the physical and cognitive health of elderly residents in their homes, logging sensor data and health records in an accessible database, extracting activity and gait patterns, identifying changes in patterns, and generating alerts that flag possible adverse health changes.

Faculty Contact: Dr. Jyh-Charn "Steve" Liu

Spring 2010


Computer Science and Engineering Invited Seminar:

Programmable and Configurable Analog Signal Processing

Dr. Paul Hasler
Associate Professor
School of Electrical and Computer Engineering
Georgia Institute of Technology

Friday, March 12, 2010
11:00 a.m., Room 302 HRBB


This talk will present the potential of configurable analog signal processing techniques for low-power portable applications such as imaging, audio processing, and speech recognition. The range of available analog signal processing functions creates many opportunities to combine these analog systems with digital signal processing systems for improved overall system performance. These approaches are enabled by dense, programmable analog techniques based on programmable transistors. We show experimental evidence of a factor of 1,000 to 10,000 improvement in power efficiency for programmable analog signal processing compared to custom digital implementations. We then discuss recent work in configurable analog signal processing centered around large-scale Field Programmable Analog Arrays (FPAAs) developed at Georgia Tech, including the resulting novel technology, infrastructure building, and the tool flow for using these configurable analog approaches.


Paul Hasler is an Associate Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Hasler received his M.S. and B.S.E. in Electrical Engineering from Arizona State University in 1991, and his Ph.D. in Computation and Neural Systems from the California Institute of Technology in 1997. His current research interests include low-power electronics, mixed-signal system ICs, floating-gate MOS transistors, adaptive information processing systems, "smart" interfaces for sensors, cooperative analog-digital signal processing, device physics related to submicron and floating-gate devices, and analog VLSI models of on-chip learning and sensory processing in neurobiology. Dr. Hasler received the NSF CAREER Award in 2001 and the ONR YIP Award in 2002. He received the Paul Raphorst Best Paper Award from the IEEE Electron Devices Society in 1997, the IEEE CICC best paper award in 2005, the best student paper award at the IEEE Ultrasound Symposium in 2006, and the IEEE ISCAS sensors best paper award in 2005. Dr. Hasler is a Senior Member of the IEEE.

Faculty Contact: Dr. Anxiao "Andrew" Jiang

Summer 2009


Computer Science Invited Seminar:

Communication Architecture and Memory Interface for Many-Core SoC

Dr. Kiyoung Choi
Department of Electrical Engineering and Computer Science
Seoul National University

Thursday, June 25, 2009
11:00 a.m., Room 302 HRBB


It is now a trend to integrate more and more cores on a system-on-chip to achieve higher performance at lower power consumption and lower design cost, which are not likely to be achieved with just several cores. However, simply adding more cores may not solve the problem and can actually make it worse. The communication overhead can degrade overall system performance and even increase power consumption. Complicated communication architecture and protocol design may increase the design cost. Parallel programming and partitioning/mapping of the application increase the software design cost, which is one of the hottest issues today. Many research groups are working on these and related issues in many different directions. This talk will focus on the communication between cores and memory and discuss how to design more efficient communication architectures and memory interfaces. Specifically, it will present the idea of a cascaded bus matrix and how to co-optimize its pipeline architecture together with the floor plan and topology. It will also present two ideas related to network-on-chip design: one introduces an entry control feature into the network-on-chip to reduce power consumption in SDRAM, and the other introduces an active memory processor to reduce communication overhead and thereby increase overall system performance.


Kiyoung Choi is a professor in the Department of Electrical Engineering and Computer Science, Seoul National University. He received his B.S. degree in electronics engineering from Seoul National University in 1978 and his M.S. degree in electrical and electronics engineering from the Korea Advanced Institute of Science and Technology in 1980. He received his Ph.D. degree in electrical engineering from Stanford University in 1989. He worked for GoldStar Inc. (predecessor of LG) from 1978 to 1983 and for Cadence Design Systems from 1989 to 1991. In 1991, he joined the faculty of the Department of Electronics Engineering, Seoul National University, which has since been merged with other departments into the Department of Electrical Engineering and Computer Science. He has coauthored numerous papers and several books and book chapters in the area of digital systems design. He has served various international conferences, including ISLPED, CODES+ISSS, ASP-DAC, and ICCAD, as a Program Chair, General Chair, and/or Executive Committee member. He has served on the editorial boards of several journals, including ACM TODAES and DAEM. His primary research interests are in various aspects of computer-aided electronic systems design. He is also interested in computer/system architecture design, especially configurable and reconfigurable architecture design.

Faculty Contact: Eun Jung Kim

Fall 2009


Computer Science and Engineering Invited Seminar:

Energy in the 21st Century: Building a Brighter Future!

Dr. Najib Abusalbi
Director of University Collaboration
Schlumberger

Thursday, September 3, 2009
5:30 p.m., Room 124 HRBB


How long will oil and gas reserves last, and does it matter? Many economists and scientists have made predictions, explored options, and written about their theories. Some predict a World War III triggered by competition for oil within ten years; others talk about "bottomless" reserves. And some people say that if we just turn lights off at night and drive hybrid cars, we'll be fine. Beyond the hype, what are the facts? This talk will cover where energy comes from and where it is consumed, how this balance has been evolving, and what seems to be in store for the world in the next decades, with a focus on the market, the technology, and the people that will shape this future.


Najib holds a Ph.D. in Atomic Physics from Louisiana State University. He joined Schlumberger in 1984 after three years in postdoctoral research and teaching assignments in physics and chemistry. Since then, he has held multiple product development and management positions in several exploration and production domains, including information management director, regional operations manager, project director and program manager, recruiting, training and career development manager, and innovation and research director, and most recently chief architect for production & operations software in upstream oil & gas. Najib Abusalbi is currently the Director of University Collaboration and is recognized as a Technology Advisor in Schlumberger's Communities of Practice; among his many contributions, he has led the Project Management community since 2005 and acts as a mentor.

Spring 2008


Computer Science Invited Seminar:

Visualizing and Measuring the Success of Fault Prediction Models

Dr. Thomas J. Ostrand
AT&T Labs
Florham Park, New Jersey

Wednesday, February 27, 2008
11:00 a.m., Room 302 HRBB


Software fault prediction has recently become an important topic of software engineering research, and several groups are investigating ways to predict which parts of a system are most likely to contain faults in the future. No one can expect perfect identification of future faults, so it is important to evaluate the relative success of different models and prediction techniques, and to find a common scale by which they can be compared. We will discuss some frequently used measures and show several ways to visualize the success rate of fault predictions. We will show how the measures apply to fault predictions that the AT&T research group has made for large systems.
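One commonly reported measure in this line of research ranks files by predicted fault count and asks what fraction of the actual faults fall in the top 20% of files. A minimal sketch (the function name and data below are invented for illustration, not taken from the talk):

```python
# Illustrative fault-prediction scoring: rank files by predicted fault
# count and report the fraction of actual faults found in the top
# `fraction` of files. Data and names here are made-up examples.

def faults_in_top_fraction(predicted, actual, fraction=0.2):
    """predicted/actual: dicts mapping file name -> fault count."""
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    top = ranked[: max(1, int(len(ranked) * fraction))]
    total = sum(actual.values())
    return sum(actual[f] for f in top) / total if total else 0.0

predicted = {"a.c": 9, "b.c": 7, "c.c": 2, "d.c": 1, "e.c": 0}
actual    = {"a.c": 5, "b.c": 3, "c.c": 1, "d.c": 1, "e.c": 0}
print(faults_in_top_fraction(predicted, actual))
# -> 0.5 (5 of the 10 actual faults are in the single top-ranked file)
```

Plotting this fraction as the file cutoff grows from 0% to 100% yields a lift-style curve, one of the visualizations of prediction success the abstract refers to.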


Tom Ostrand is a Principal Member of Technical Staff at AT&T Labs in New Jersey. His research areas are software fault analysis and prediction, software testing, and empirical software engineering. Tom is a member of ACM and ACM-SIGSOFT, and is a past member of the SIGSOFT Executive Committee. He is currently an Associate Editor of the Journal of Empirical Software Engineering, the Program Chair for the Workshop on Predictor Models for Software Engineering (PROMISE), and a member of the Steering Committee of the International Symposium on Software Testing and Analysis (ISSTA). He was formerly in the Computer Science department of Rutgers University, and the software research divisions of Sperry Univac and Siemens Corporate Research.

Faculty Contact: Valerie E. Taylor

Fall 2008


Computer Science Invited Seminar:

The Role of Molecular Imaging in the Discovery of New Drugs and Therapies

Mark Lenox
Department of Computer Science
University of Tennessee

Monday, October 13, 2008
2:00 p.m., Room 302 HRBB


Molecular imaging involves the measurement of chemical concentrations of specific compounds in living tissue at the molecular level. It provides medical doctors and researchers with important information and enables new insight in the study and management of disease. This new level of information is changing medicine, making it more individualized, and much more effective. We will discuss what molecular imaging is, as well as the technology that enables it and some case studies in research. This includes recent advances in nanotechnology that are enabling new and innovative cancer therapies. Molecular imaging, applied here at TAMU, will help to bring those technologies to fruition.


Mark Lenox has spent most of his career applying computer and software technology to difficult problems. He graduated from Arizona State in 1989 with a B.S.E. in Systems Engineering and received an M.S.E.E. from Texas A&M in 1990. After working for a couple of years in Dallas, TX, he took a job at a small startup company in Knoxville, TN, by the name of CTI Molecular Imaging. His first job at CTI involved architecting a new software package, written in C++, for their PET tomographs. After that, Mark moved into hardware and developed a new high-performance platform using FPGAs to perform digital nuclear pulse processing at very high speed. With these tools in hand, he was made the lead engineer and project manager for the High Resolution Research Tomograph (HRRT). The HRRT was a collaboration between CTI and the Max Planck Institute of Neurological Imaging, and it represented an all-out effort to build the most powerful imager of the human brain ever devised, and the first to use the new generation of digital electronics. Even though the original plan was to build one unit, the program was very successful and expanded to 17 units due to extreme demand. These were placed at the most prestigious neurological research institutes worldwide to aid their research in diseases such as stroke, Alzheimer's, Parkinson's, and other dementias. With the HRRT program finished, Mark was named Director of New Product Development for the Preclinical Division of CTI Molecular. There he led the development of several new systems designed specifically for research work in the development of new drugs and the study of disease. In 2005, CTI Molecular was sold to Siemens for $1B. Mark has one patent currently granted, with several more in the PTO pipeline, and around 20 publications in PET instrumentation, with at least that many more in imaging applications.
He lives in Knoxville, Tennessee, with his wife and two children, and is currently working as a consultant while he finishes a Ph.D. in Computer Science at the University of Tennessee.

Faculty Contact: Valerie E. Taylor