Winter 2019 Schedule

Jan 11, 2019
What is Research?
Towards an Assets-Based Approach to Designing Socio-technical Solutions with Resource Constrained Communities
Sheena Erete – DePaul University

Neighborhoods play a critical role in the social and economic outcomes of their residents. Particularly for resource-constrained neighborhoods that face issues such as poverty, unemployment, crime, lack of educational opportunities, and inadequate housing, it is essential to identify and address issues that contribute to decline, as evidenced by a lack of public revitalization efforts, local resources, and private investments. As tech designers, it is common to approach work in resource-constrained communities with a deficit view, leading to simplistic and stigmatizing descriptions of these communities that often minimize or ignore the institutional infrastructures that breed inequity. In our work, we design with residents in resource-constrained communities by focusing on their assets, specifically the human, environmental, social, and economic capital that can be leveraged in design. In this talk, I will discuss how we can identify and leverage local assets in resource-constrained communities when designing technologies, practices, and policies. Specifically, I will present two projects in which we take an asset-based approach to understand how neighborhoods use technology to address crime and how we design technologies to support violence prevention in Chicago. Results from these studies provide insight into how to effectively leverage community assets, particularly in historically marginalized communities.

Speaker Bio: Dr. Sheena Erete is an assistant professor in the College of Computing and Digital Media at DePaul University. Her research explores the role of technology and design in addressing social issues such as violence, civic engagement, and STEM education in resource-constrained communities in Chicago. She earned a Ph.D. in Technology and Social Behavior (a joint degree in Computer Science and Communication) from Northwestern University and a Master of Science in Computer Science from Georgia Tech. As an undergraduate, she attended Spelman College, where she studied Mathematics and Computer Science.


Jan 18, 2019
Retinal Imaging and the AI Health Revolution
Amani Fawzi – Northwestern University

The talk will start with a brief summary of Dr. Fawzi’s current research projects in retinal imaging. It will then give an overview of the promise and challenges of the exciting applications of AI in ophthalmology, including approaches for screening disease, health assessments, and the potential to streamline healthcare delivery and telemedicine. Finally, we will discuss an ongoing collaboration between our labs and Drs. Raicu and Furst, in which we will set out to develop AI approaches specifically applied to imaging of subjects with age-related macular degeneration. Using existing datasets at Northwestern Ophthalmology, we hope to develop an AI model that can predict eyes at high risk for vision loss at an earlier stage of the disease process. This model will be critical to improving patient outcomes by identifying patients who should be monitored more closely, allowing clinicians to implement early interventions. Success of this approach would also be critical for identifying and targeting these high-risk subjects for future preventative clinical trials.

Speaker Bio: Dr. Fawzi is a vitreoretinal surgeon, clinician-scientist, and Professor in the Department of Ophthalmology. She divides her time between her clinical/surgical practice and her NIH-funded research at Northwestern University.

At Northwestern University, Dr. Fawzi runs an active NIH-funded translational research laboratory. Her lab studies animal models of ischemic retinopathies, and her clinical research focuses on novel functional retinal imaging approaches including OCT angiography, visible-light OCT, and hyperspectral imaging. Recognized for her imaging research, Dr. Fawzi serves on the Editorial Boards of Scientific Reports (Nature), Retina, and Investigative Ophthalmology & Visual Science, as well as serving on several NIH study sections. She has authored/coauthored over 160 peer-reviewed articles, has delivered several named Lectureships, and has been elected a member of the Retina and Macula Societies. She has received the Honor Award of the American Society of Retina Specialists and the Achievement Award of the American Academy of Ophthalmology.


Jan 25, 2019
Weird Fun and Normal Fun: Designing from Play-Style in The Parasite ARG
Peter McDonald – DePaul University

In 2017, the University of Chicago launched an Alternate Reality Game (ARG) called The Parasite for all 1800 incoming undergraduate students. Peter McDonald will be discussing how the idea of play-style guided the design of pathways into the game, and how play-style should be re-conceptualized in order to make it a more robust tool for researching player behaviors.

Speaker Bio: Peter McDonald researches playfulness, designs games, and is on the lookout for new ways to play. He currently holds the position of assistant professor at DePaul University and completed his Ph.D. at the University of Chicago in 2018. His dissertation, “Playfulness 1947-2017,” explores the connections between mid-century art games and the design of contemporary video games. Peter’s research focuses on the ways that players make sense of and interpret games. Sometimes that means looking closely at the patterns of rhythm and rhyme in the songs that accompany children’s ball games; sometimes it means examining game controllers as semiotic systems. His work has appeared in Games & Culture, The American Journal of Play, and Analog Game Studies, among other publications. As a game designer, Peter is fascinated by large-scale and pervasive forms of play, particularly Alternate Reality Games. While at the University of Chicago, he worked on several large-scale games with funding from the MacArthur Foundation and the NSF, including The Project, The Source, SEED, and The Parasite. These games involved hundreds of players exploring elaborately staged worlds across the south side of Chicago and online. He finds these games exciting because they offer an invitation to a whole community and explore utopian alternatives to everyday life.


Feb 1, 2019
Visual Texture Analysis: From Similarity to Material Properties
Thrasyvoulos N. Pappas – Northwestern University
Texture is an important visual attribute for both human perception and image analysis. It provides important clues for object shape and boundary detection, as well as material identification. Our research initially focused on texture similarity, which is important for a variety of applications, including image and video quality and compression, and content-based retrieval. We have proposed a new class of structural texture similarity metrics (STSIMs) that account for human visual perception and the stochastic nature of textures. They rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are similar or essentially identical. We have also developed new testing procedures for objective texture similarity metrics. We have identified three operating domains for evaluating the performance of such metrics, each with different performance goals and testing procedures. We have also proposed ViSiProG (Visual Similarity by Progressive Grouping), a new procedure for collecting subjective similarity data.
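
To illustrate why a metric based on local statistics can allow large point-by-point deviations between textures that look identical, consider this toy one-dimensional sketch. The formula and names here are illustrative assumptions only; the actual STSIMs use much richer subband statistics.

```python
# Two shifted copies of the same periodic "texture" differ greatly
# pixel-by-pixel, yet their local statistics (and hence their perceived
# appearance) agree. This contrast motivates statistics-based metrics.

def local_stats(signal, window=4):
    """Per-window mean and variance, a crude stand-in for STSIM features."""
    stats = []
    for i in range(0, len(signal) - window + 1, window):
        w = signal[i:i + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        stats.append((mean, var))
    return stats

def mse(a, b):
    """Point-by-point comparison, as a plain fidelity metric would do."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

texture = [0, 9, 0, 9] * 8           # a simple periodic texture
shifted = texture[1:] + texture[:1]  # same texture, shifted one sample

print(mse(texture, shifted))                          # → 81.0 (huge error)
print(local_stats(texture) == local_stats(shifted))   # → True (stats agree)
```

The shifted signal is perceptually the same texture, but a point-by-point metric judges it maximally different, while the window statistics match exactly.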

Our current focus is on material identification and the extraction of material properties. Understanding material properties from visual texture is important for a variety of applications including surveillance and security, environmental monitoring, forestry and agriculture, product quality, health, cosmetics, and virtual reality.

Speaker Bio: Thrasos Pappas received the Ph.D. degree in electrical engineering and computer science from MIT in 1987. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He joined the EECS Department at Northwestern in 1999. His research interests are in human perception and electronic media, in particular image and video quality and compression, image and video analysis, content-based retrieval, medical image analysis, model-based halftoning, and tactile and multimodal interfaces. Prof. Pappas is a Fellow of the IEEE, SPIE, and IS&T. He has served as Vice President-Publications (2015-17) of the IEEE Signal Processing Society, Editor-in-Chief of the IEEE Transactions on Image Processing (2010-12), elected member of the Board of Governors of the IEEE Signal Processing Society (2004-07), chair of the IEEE Image and Multidimensional Signal Processing Technical Committee (2002-03), and technical program co-chair of ICIP-01 and ICIP-09. From 1997 to 2018, he served as co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging. He is currently one of the two founding Editors-in-Chief of the IS&T Journal of Perceptual Imaging.


Feb 8, 2019
Advancing Energy Testing of Android
Reyhaneh Jabbarvand – UC Irvine

The utility of a smartphone is limited by its battery capacity and the ability of its hardware and software to efficiently use the device’s battery. To properly characterize the energy consumption of an app and identify energy defects, it is critical that apps are properly tested, i.e., analyzed dynamically to assess the app’s energy properties. However, currently there is a lack of testing tools for evaluating the energy properties of apps. As a result, for energy testing, developers are relying on tests intended for evaluating the functional correctness of apps. Are such tests adequate for revealing energy defects in apps? If not, what are the properties of tests that can effectively find energy inefficiencies in apps? How can we automatically generate such tests? Answers to these questions are the subject of my presentation.

In the first part of this talk, I will introduce μDroid, a mutation testing technique that can be used by developers to assess the adequacy of their test suite for revealing energy-related defects. Applying μDroid to real-world Android apps with available test suites showed that current Android testing tools are in fact ineffective at finding energy defects. Based on the insights from this study, I identified characteristics of tests that can effectively find energy issues in Android apps. In the second part of this talk, I will present COBWEB, a search-based energy testing technique that automatically generates energy tests. Experimental results on real-world Android apps demonstrate not only COBWEB’s ability to effectively and efficiently test energy behavior of apps, but also its superiority over prior techniques by finding a wider and more diverse set of energy defects.
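
The core distinction, that a functional test can pass while an energy defect goes unnoticed, can be sketched in a few lines. This is a deliberately toy illustration with hypothetical `app` and `energy_test` names, not μDroid's or COBWEB's actual implementation.

```python
# A mutant injects an energy defect (here, "forgetting" to release a
# wakelock). A test suite is adequate for energy testing only if some
# test can distinguish the mutant from the original program.

def app(release_wakelock=True):
    """Simulated app run: returns (functional_result, energy_consumed)."""
    energy = 5.0                 # baseline cost of doing the work
    if not release_wakelock:
        energy += 20.0           # leaked wakelock keeps the CPU awake
    return "ok", energy

def functional_test(run):
    result, _ = run()
    return result == "ok"        # ignores energy entirely

def energy_test(run, budget=10.0):
    result, energy = run()
    return result == "ok" and energy <= budget

original = lambda: app(release_wakelock=True)
mutant = lambda: app(release_wakelock=False)   # energy-defect mutant

# The functional test passes on both versions, so it cannot "kill" the
# mutant; the energy-aware test kills it via an energy budget.
assert functional_test(original) and functional_test(mutant)
assert energy_test(original) and not energy_test(mutant)
```

In this framing, mutation testing for energy asks how many such injected defects a suite can kill, which is exactly the adequacy question the abstract raises.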

Speaker Bio: Reyhaneh Jabbarvand is a PhD candidate in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine (UCI). Her research interests include analysis and testing of mobile apps to address security and energy issues. She has been awarded the Google PhD Fellowship in Programming Technology and Software Engineering for her work on advancing energy testing of Android. She is the lead author of several publications that have appeared in top software engineering venues, including ICSE, ESEC/FSE, and ISSTA. More info about her can be found at:


Feb 15, 2019
Low-Cost Information Extraction through Human-Computer Partnership
Roselyne Tchoua – University of Chicago

Instead of relying on empirical and theoretical methods alone, scientists are now turning to machine learning algorithms to make sense of huge data sets. Despite their great promise, such methods are only as good as the data available for training, and thus they typically require large, high-quality, machine-accessible data sets. Unfortunately, in many domains, important data is locked away in scientific articles, and significant, expensive effort is required to retrieve it. In this talk, I will describe how fully automated (“unsupervised”) methods can be combined with carefully targeted human effort to achieve results comparable to state-of-the-art information extraction software at a fraction of the cost. Specifically, I will describe polyNER, a hybrid human-computer system for extracting polymer names from text. To circumvent the need for a large annotated corpus, polyNER uses an ensemble of word embedding models and limited domain-specific knowledge to propose candidate entities. These candidates can then be labeled by experts, a task that is much easier than reading documents and recognizing the entities in them. PolyNER uses these labels to train semi-supervised named-entity word vector classifiers that can then be used to automatically identify polymer names in text. To reduce human effort and optimize accuracy, we apply expert-in-the-loop active learning methods to carefully select which candidates should be labeled. Our preliminary results are comparable to those of a state-of-the-art, domain-specific information extraction toolkit, yet require only minimal human input. Application of polyNER to an orthogonal problem, extracting dataset references from social science literature, demonstrates the generalizability of our proposed methodology. This work highlights the potential of adequately capturing domain knowledge to develop and apply low-cost, high-accuracy machine learning solutions to real-world problems.
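
The candidate-proposal step can be sketched as follows. This is an illustrative toy with made-up three-dimensional vectors and a hypothetical `propose_candidates` helper, not polyNER's actual code; in practice the embeddings come from models trained on the domain corpus.

```python
# Words whose embeddings lie close to known polymer names (seeds) are
# proposed as candidate entities and sent to an expert for a cheap
# yes/no label, instead of asking the expert to read whole documents.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d "embeddings" for a handful of corpus words.
embeddings = {
    "polystyrene":  (0.90, 0.10, 0.00),   # known seed polymer
    "polyethylene": (0.85, 0.15, 0.05),
    "toluene":      (0.40, 0.60, 0.10),
    "beaker":       (0.05, 0.10, 0.90),
}
seeds = ["polystyrene"]

def propose_candidates(embeddings, seeds, threshold=0.95):
    out = []
    for word, vec in embeddings.items():
        if word in seeds:
            continue
        score = max(cosine(vec, embeddings[s]) for s in seeds)
        if score >= threshold:
            out.append(word)   # candidate for expert labeling
    return out

print(propose_candidates(embeddings, seeds))   # → ['polyethylene']
```

The expert's labels on such candidates then become training data for the downstream classifier, which is how a large annotated corpus is avoided.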

Speaker Bio: Roselyne Tchoua is a PhD candidate in the Department of Computer Science at the University of Chicago. Her research focuses on exploring methods to combine human and machine approaches to a variety of scientific information extraction problems. Specifically, she works with collaborators at the Institute for Molecular Engineering and the National Institute of Standards and Technology to extract polymer names and properties from scientific literature. Before coming to Chicago, she was a scientist in the Scientific Data Group at Oak Ridge National Laboratory, where she worked on a number of projects at the intersection of computer science and other science domains. She received her Bachelor and Master of Science degrees in Electrical Engineering from the University of Tennessee, Knoxville, in 2004 and 2006, respectively.


Feb 22, 2019
Synthetic Nervous Systems for Legged Robotics
Nicholas Szczecinski – Case Western Reserve University

Mobile robots have the potential to revolutionize fields as diverse as agriculture, emergency response, and extraplanetary exploration. Legged robots, in particular, could enable access to and transportation over previously inaccessible terrains. Legs are an inherently biological mobility solution, so it is reasonable to look to animals for inspiration. In particular, insects are highly-evolved walking machines. With the advent of optogenetics and other genetic tools for manipulating the nervous system, the opportunity for learning about nervous systems and applying that knowledge to robots has never been greater.

My primary research goal is to use physical and simulated robotic systems to test neurobiological knowledge uncovered by my biologist collaborators. My secondary goal is to use these models as the basis for legged robot controllers that may be more adaptable and scalable than the state of the art. My research uses biological data to construct bottom-up computational neuroscience models of insect nervous systems (termed “Synthetic Nervous Systems,” or SNS), and applies these models as legged robot controllers. The goal is to create a dynamical, transparent control system for robot locomotion and behavior, whose structure comes from biological data and is tuned to perform a useful function. In this talk, I will describe the progression of this research, note some offshoot and collaborative projects, and discuss industrial applications of this technology.
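
One concrete building block of such dynamical, transparent controllers is the non-spiking leaky-integrator neuron. The sketch below is illustrative only; the parameters and names are assumptions, not taken from the speaker's models.

```python
# A leaky-integrator neuron: the membrane state integrates applied
# current and decays toward rest, giving smooth dynamics suitable for
# driving a motor. Integrated here with simple forward-Euler steps.

def simulate(I_app, steps=200, dt=0.001, C=0.005, g_leak=1.0, E_rest=0.0):
    """Integrate C*dV/dt = g_leak*(E_rest - V) + I_app."""
    V = E_rest
    for _ in range(steps):
        V += dt / C * (g_leak * (E_rest - V) + I_app)
    return V

# With constant input, the state settles near I_app / g_leak, so the
# neuron acts as a tunable low-pass filter on its drive signal.
print(round(simulate(I_app=2.0), 3))   # → 2.0
```

Networks of such units, wired according to biological connectivity data, are one common way to make the controller's structure inspectable rather than a black box.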

Speaker Bio: Nick is a research associate in the Biologically Inspired Robotics Laboratory at Case Western Reserve University in Cleveland, Ohio. There he manages a team of students with backgrounds in engineering, computational neuroscience, and animal neuroscience. Prior to this position, he was a postdoctoral scholar in Prof. Ansgar Bueschges’ neurobiology laboratory at the University of Cologne in Cologne, Germany. Nick got his Ph.D. (2017), M.S. (2013), and B.S. (2012) in Mechanical Engineering from Case Western Reserve University. He has published 32 peer-reviewed journal and conference papers, and has presented at 12 national and international conferences. Nick maintains several interdisciplinary research collaborations in Europe and North America, and is looking forward to applying what is known about animals’ nervous systems to build practical robots.


Mar 1, 2019
PLI+: An Efficient Write-optimized Index
Dai-Hai Ton-That – DePaul University

The emergence of personal devices has led to explosive growth of data. Existing indexing methods in DBMSes are inefficient at handling such large incoming volumes of data.

In this talk I will first present an overview of traditional indexing techniques such as B-trees. I will then present the recently proposed PLI+, a write-optimized index designed to support massive data inserts. PLI+ relies on internal knowledge of the data layout; it builds a physical location index, which maps ranges of co-located physical addresses to ranges of attribute values to create approximately sorted buckets. As new data is inserted, writes are partitioned in memory based on the incoming data distribution. The data is written to physical locations on disk in block-based partitions to favor large-granularity I/O. In the talk, I will present the design and analysis of PLI+, along with experimental results on its performance with real data sets on hard disks and Flash drives. Finally, I will provide an overview of other research directions, covering topics related to the database kernel, privacy-preserving data management, and reproducibility of scientific software.
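
The bucket idea can be sketched roughly as follows. This is a simplified illustration with hypothetical class and parameter names, not the actual PLI+ code.

```python
# Incoming writes are partitioned in memory by attribute-value range,
# and each partition is flushed to disk as one large sequential block,
# so the physical layout stays approximately sorted and a lookup only
# needs to scan a single bucket.

BUCKET_WIDTH = 100      # attribute-value range covered by one bucket

class PhysicalLocationIndex:
    def __init__(self):
        self.memory = {}   # bucket id -> buffered rows
        self.disk = {}     # bucket id -> list of flushed blocks

    def bucket_of(self, key):
        return key // BUCKET_WIDTH

    def insert(self, key, row):
        self.memory.setdefault(self.bucket_of(key), []).append((key, row))

    def flush(self):
        # One large-granularity write per bucket, instead of one random
        # I/O per row as a B-tree insert might cause.
        for b, rows in self.memory.items():
            self.disk.setdefault(b, []).append(rows)
        self.memory = {}

    def lookup(self, key):
        # Only the (approximately sorted) bucket for this key is scanned.
        for block in self.disk.get(self.bucket_of(key), []):
            for k, row in block:
                if k == key:
                    return row
        return None

idx = PhysicalLocationIndex()
for k in [5, 250, 120, 7, 260]:
    idx.insert(k, f"row-{k}")
idx.flush()
print(idx.lookup(250))   # → row-250
```

The trade-off illustrated here is the one the abstract describes: buckets are only approximately sorted, but inserts become sequential, block-sized writes, which is what favors hard disks and Flash.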

Speaker Bio: Dai-Hai Ton-That is currently a postdoctoral researcher at the College of Computing and Digital Media (CDM) at DePaul University. He received his PhD in Computer Science at the PRISM laboratory of the University of Versailles Saint-Quentin, Paris-Saclay University, France, in January 2016. In 2008, he received his first degree in Computer Science and Engineering from Ho Chi Minh City University of Technology (HCMUT), Vietnam. From 2008 to 2011, he was a researcher and lecturer at HCMUT, where he also obtained his Master's in Computer Science. He then completed his first postdoc at the CEA-LIST Institute, CEA Saclay, Paris-Saclay University, France. His main research interests lie in reproducibility, big data techniques in biology, indexing of spatio-temporal trajectories on Flash devices, and privacy preservation in participatory sensing systems.


Mar 8, 2019
Eli Brown – DePaul University

Working with data has become a critical skill for today’s workforce as the cohort of businesses and organizations looking to cash in on the promise of big data grows. When people with expertise in another domain work to understand data science, they put new tools to use and develop their organizations, but they may still fail to realize the promise of the big data revolution. One reason is the unfair expectation that someone with a lifetime of expertise in another subject should be able to quickly master data science, a field that requires a lifetime to master in its own right. Should a journalist be expected to properly tune a regression analysis? Must a biologist’s workflow require days of adjusting parameters in hopes of an interpretable outcome? We should instead be creating tools that enable the expert to work with data visually in a domain-intuitive way and that apply machine learning automatically, learning from their interactions.

The technology required to do this must meld the best of human and machine intelligence. Current workflows may use machine learning or data visualization in isolation, or may clumsily combine them in sequence. While visualization can help someone with expertise in their data domain gain insight quickly, relying on it alone means missing out on the automated modeling provided by machine learning. Conversely, relying only on machine learning means missing out on deeper insight, because it produces black-box models from raw data, evaluated through a narrow lens. Building combined, integrated technology requires careful consideration throughout the feedback loop between human and machine. The visual interface must match the domain’s expectations; it must also collect information from the user that is useful to a machine learning algorithm; and the algorithm must be able to update its model quickly, so that the results of user interactions can be reflected back in a way that is helpful to the user. In this way, we enable domain experts to focus on their own domain while gaining the advantage of machine learning that works for them in the background.
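
The feedback loop described above can be sketched with a deliberately tiny model that is cheap enough to update on every interaction. The names and the nearest-centroid model here are illustrative assumptions, not a specific LIHCA system.

```python
# Each user interaction (labeling a point in the visual interface)
# triggers a fast incremental model update, so the refreshed model can
# be rendered back to the user before their next interaction.

class NearestCentroidModel:
    """Incremental model: an update is O(1), so feedback is immediate."""
    def __init__(self):
        self.sums = {}   # label -> (running sum of points, count)

    def update(self, point, label):
        s, n = self.sums.get(label, (0.0, 0))
        self.sums[label] = (s + point, n + 1)

    def predict(self, point):
        # Nearest class centroid (1-d points for simplicity).
        return min(self.sums,
                   key=lambda lab: abs(point - self.sums[lab][0]
                                       / self.sums[lab][1]))

model = NearestCentroidModel()
interactions = [(1.0, "low"), (2.0, "low"), (10.0, "high")]
for point, label in interactions:   # each label supplied via the UI
    model.update(point, label)      # model refreshed before next render
print(model.predict(9.0))           # → high
```

The design point is the update cost: because the model absorbs each interaction in constant time, the interface can stay responsive, which is the requirement the paragraph above places on the algorithm.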

In this talk, I discuss my work on these interactive machine learning (IML) systems with domain collaborators in biotechnology, medical informatics, and journalism. In addition, I will explain our efforts to facilitate broader adoption of this technology by developing a platform for building IML systems.

Speaker Bio: Eli T. Brown is an Assistant Professor in the DePaul University College of Computing and Digital Media, where he directs the Laboratory for Interactive Human-Computer Analytics (LIHCA) and participates in the Medical Informatics Lab (MedIX). He teaches courses in data visualization and machine learning, and his research is focused on their intersection: human-in-the-loop analytics. Building systems that make use of both human and machine analytic capabilities, LIHCA works to solve problems with collaborators in domains including journalism, biotechnology, and medical informatics. The lab has also released a platform to help others build these interactive machine learning systems. Recently, he has begun studying data science practice, particularly with regard to improving general tools for interactive machine learning (IML) and understanding how analysts work with uncertainty.


Mar 15, 2019
I’m Not Kidding!: Childfree by Choice – Research and Reflections
Shruthi Manjula Balakrishna – Gensler

“Pronatalism” means “pro-natal,” or “pro-baby.” It is the idea that parenthood and raising children should be the central focus of every person’s adult life. The book Pronatalism: The Myth of Mom and Apple Pie defines pronatalism as “…an attitude or policy that is pro-birth, that encourages reproduction, that exalts the role of parenthood.” Pronatalism is a strong social force that glorifies parenthood and includes a collection of beliefs so deeply embedded that they have come to be seen as “true.” Thanks to celebrities and the media, pregnancy and the raising of children are glamorized like at no other time in history.

For some people, there might be nothing more fulfilling than bringing a child into this world and raising it. Having said that, it isn’t necessarily true that parenthood is the right choice for everyone. And that is the problem with pronatalism – it leads everyone to believe that they should have children. It also leads people to believe that they have the right to have as many children as they want. It’s time to take another hard look at pronatalism and its assumptions. This research project and comic series is a manifesto to ignite a transition into a society that can respect and support true reproductive freedom and choice.

I picked a humorous approach to the design and vocabulary to open the stage for parents, the childfree, and the childless. The comic series urges people considering the childfree lifestyle to refuse to “follow the pack” simply because society expects us to conform to tradition; carefully considering the long-term implications of creating new lives, and perhaps deciding against it, is empowering. To be clear, this project is not against people who choose to become parents. I am only trying to take a closer look at the pronatal situation at hand in order to see the truth about parenthood and reproduction. It’s time we realize that either choice is equally legitimate and equally acceptable and respectable. There’s no reason to think otherwise.

Speaker Bio: Shruthi is the legitimate love-child of strategy and creativity. With more than 10 years of advertising experience, Shruthi’s background is as varied and diverse as the creative she produces. She received a traditional design education from the Art Institute of Colorado, and she brings together a variety of creative disciplines and experience, enabling her to supply not merely design but a crafted product that feels together, connected, and harmonized via one creative approach that bleeds the same blood throughout the elements.
Her design sensibilities are heavily inspired by traditional print design and a deep love for diverse cultural history. She has worked with a variety of international, national, and local brands, such as Google, Seiko Watch Corporation, Nolte Germany, AIGA, KOA, Jenny Craig, Which Wich, Lyra Health, Man Therapy, Odell Brewing, and The Colorado Lottery. She is currently working as a Senior Brand Designer at Gensler and is pursuing her MFA in Graphic Design at Vermont College of Fine Arts.