Winter 2017 Schedule

1/6/2017 – Taihua Li – DePaul University
Recommender Systems to Support Brokering of Youth Learning Opportunities

Abstract: Recent research examining learning in informal environments reflects a growing recognition of the important role adults play as learning brokers, identifying and orchestrating connections to learning opportunities such as access to people, spaces, programs, and information sources. In order to broker learning, educators need knowledge about both the youth and the potential opportunities. This talk proposes the design and development of a recommender system, built on log data from an online social learning network, to support educators in brokering learning.

Biography: Taihua Li is a second-year Master of Science in Predictive Analytics student at DePaul University. He holds two Bachelor of Arts degrees from Ripon College in Economics and Business Management. Taihua is a research assistant at the Technology for Social Good Research and Design Lab supporting the study of learning analytics with the Digital Youth Network.

1/13/2017 – Owen Schaffer – DePaul University
What Makes Games Fun? Card Sorting to Investigate Sources of Computer Game Enjoyment

Abstract: Understanding what makes games fun is not just important for game designers, but for anybody interested in creating enjoyable experiences. Computer and video games are obstacles people choose to overcome for the enjoyment they provide. Playing games takes time and effort. So, why do people play them? What makes games so enjoyable that people want to play them?

Come join us for this participatory workshop, where we will try to answer this question with a fun and easy activity. (The activity will be guided step-by-step, so no previous knowledge or experience is required.) Following the participatory activity, many theories of enjoyment will be explored. The present study, in which 60 participants sorted 167 sources of enjoyment into categories to develop a new model of computer game enjoyment, will then be presented.
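As a side note for attendees unfamiliar with the method (this sketch is illustrative and uses made-up cards, not data from the study): card-sort results are commonly analyzed by counting how often participants place two items in the same group, yielding a similarity matrix that can then be clustered into categories.

```python
import numpy as np

# Hypothetical card-sort data: each participant groups card labels into piles.
# Cards sorted into the same pile more often are treated as more similar.
sorts = [
    [{"challenge", "mastery"}, {"story", "music"}],    # participant 1
    [{"challenge", "mastery", "story"}, {"music"}],    # participant 2
    [{"challenge", "mastery"}, {"story"}, {"music"}],  # participant 3
]

cards = sorted({c for sort in sorts for pile in sort for c in pile})
idx = {c: i for i, c in enumerate(cards)}

# Co-occurrence matrix: fraction of participants who put each pair together.
co = np.zeros((len(cards), len(cards)))
for sort in sorts:
    for pile in sort:
        for a in pile:
            for b in pile:
                co[idx[a], idx[b]] += 1
co /= len(sorts)

print(cards)
print(co.round(2))
```

Here "challenge" and "mastery" end up with similarity 1.0 (all three participants grouped them together), while "challenge" and "story" get 1/3; feeding such a matrix to a hierarchical clustering routine is one standard way to derive categories from card sorts.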

Biography: Owen Schaffer is a doctoral student studying HCI and IS at DePaul University’s College of Computing and Digital Media (CDM). Owen studies flow and enjoyment in games with Professor Xiaowen Fang. He received his MA in Positive Organizational Psychology and Evaluation at Claremont Graduate University with Professor Mihaly Csikszentmihalyi. Owen has worked as a user researcher improving websites and games, both as an external consultant for Human Factors International and as an internal researcher for companies such as Ubisoft and Kaiser Permanente. He has also taught user experience training courses for Human Factors International, and has passed their exams to become both a Certified Usability Analyst and a Certified User Experience Analyst.

1/20/2017 – Fatemeh Vahedian – DePaul University
A Multi-Relational Recommender System Framework for Heterogeneous Information Networks

Abstract: Recommender systems are essential tools on the information-overloaded web, where personalization is sought. Traditional recommenders, however, are mainly built on two-dimensional relations in which the interactions between users and items are represented as a single relation. This relation between user and item entities can be implicit or explicit depending on the application. In real-world applications of recommender systems, such as social networks, many dimensions of entities can be imagined, connected via complex heterogeneous relations.
The goal of our work is to integrate the multi-dimensionality of data captured in a complex network to improve recommendation accuracy. In this talk we introduce a framework for recommender systems in heterogeneous information networks that combines multi-relational information using extended paths. We explore the challenges and problems of extended multi-relational recommendation in two different recommendation models.
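To give a flavor of path-based scoring in such networks (an illustrative sketch on toy data, not the speaker's framework): given a binary user-item interaction matrix R, counts of walks along the extended path user→item→user→item can be obtained with matrix products, and each user can then be recommended the unconsumed item with the highest path count.

```python
import numpy as np

# Hypothetical toy network: rows are users, columns are items.
R = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
], dtype=float)

# Number of user -> item -> user -> item walks between each user/item pair.
scores = R @ R.T @ R

# Zero out already-consumed items, then pick the top-scoring remaining item.
scores[R > 0] = 0
rec = scores.argmax(axis=1)
print(rec)
```

Longer meta-paths through other entity types (tags, social links) extend this idea by inserting the corresponding relation matrices into the product; combining evidence from several such paths is the kind of multi-relational integration the talk addresses.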

Biography: Fatemeh Vahedian is a PhD candidate at DePaul University. She received her Master’s in Information Technology and her Bachelor’s in Computer Science. She is currently a research assistant in the Web Intelligence Lab under the supervision of Prof. Burke and Prof. Mobasher. Her research revolves mostly around multi-relational recommender systems and heterogeneous social network analysis. She has published her work in several conferences and journals, including RecSys, UMAP, FLAIRS, and ACM Transactions on the Web.

1/27/2017 – Rafael Tenorio, PhD – DePaul University
Economic Behavior and Incentive Provision in BitTorrent Communities: A Look from Within

Abstract: Peer-to-peer (P2P) file sharing has become a significant way of distributing digital content on the Internet. From the early days of the music sharing site Napster to the later emergence of giant multi-content sharing sites like ThePirateBay, the P2P file sharing economy has experienced tremendous growth over the last 15 years. Among file sharing models, the BitTorrent protocol has emerged as the clear market leader. At the end of 2015, BitTorrent was the highest-traffic upstream application on the Internet in North America (and fourth overall, behind only Netflix, YouTube, and traditional HTTP transfers). In this project we exploit the unique opportunity of accessing the control panel of two BitTorrent communities to (a) understand some of the intricate aspects of user behavior in these communities, and (b) analyze the sensitivity of user behavior to changes in sharing parameters using randomized field experiments. Our preliminary findings suggest that the sense of community and reputation building in these BitTorrent sites supersedes any material incentives that users may receive to increase their site contributions.

Biography: Rafael Tenorio was born in Lima, Peru, and received his B.A. in Economics from the University of Lima prior to coming to the U.S. and completing a PhD in Economics at Johns Hopkins University. He has been a Professor of Economics at DePaul since 2003, and has also held positions at the University of Notre Dame, the World Bank, and the Central Bank of Peru. His main area of interest is applied game theory, with special emphasis on auctions and other allocation mechanisms in traditional and online environments. His research combines theoretical modeling, econometric analysis, and experimental methods to gain insight into the behavior of agents in various real-life markets and institutions.

2/3/2017 – Dai-Hai Ton-That, PhD – Paris Saclay University, France
Efficient Indexing Techniques for Spatio-Temporal Data on Mobile Devices

Abstract: Advances in mobile devices and embedded sensors have enabled an unprecedented number of services for users. At the same time, most mobile devices continuously generate, store, and communicate a large amount of personal information. Managing personal information on mobile devices remains a major challenge, both because of the inherent constraints of these devices and because of the need for safe and secure access to and sharing of this information. This talk addresses these challenges with a focus on location traces. In particular, we propose TRIFL, an efficient indexing technique for spatio-temporal data designed for flash storage. In addition, to protect users’ personal mobility traces, we propose PAMPAS, a distributed architecture and privacy-aware protocol for participatory sensing. PAMPAS relies on secure hardware solutions for the distributed computation of spatio-temporal aggregates over the collected private data.

Biography: Dai-Hai Ton-That received his PhD in Computer Science from the PRISM laboratory at the University of Versailles Saint-Quentin, Paris Saclay University, France, in January 2016. In 2008, he received his first degree in Computer Science and Engineering from Ho Chi Minh City University of Technology, Vietnam (HCMUT). From 2008 to 2011, he was a researcher and lecturer at HCMUT, where he also obtained his Master’s in Computer Science. Recently, he completed his first postdoctoral position at the CEA-LIST Institute, CEA-Saclay, Paris Saclay University, France. His main research interests lie in big data techniques in biology, indexing of spatio-temporal trajectories on flash devices, and privacy preservation in participatory sensing systems.

2/10/2017 – Olayele Adelakun, PhD – DePaul University
Innovation, Education and Research at the iD-Lab

Abstract: Founded in 2016 by Dr. Olayele Adelakun, the DePaul University College of Computing and Digital Media’s Innovation Development Lab (iD-Lab) was formed to serve as a model for university-corporate partnerships in the area of technology innovation. The iD-Lab was developed based on lessons learned from research on technology clusters, with a focus on Silicon Valley. Information technology companies operating outside of technology clusters like Silicon Valley tend to be at a disadvantage when it comes to innovation. One of the key success factors for companies in IT clusters is their strong relationships with local universities. As Chicago continues to invest in sites such as 1871, it is important to consider ways to build bridges between companies and universities in the area of innovation. The iD-Lab was developed as a space for building such bridges between DePaul and companies, with the goal of growing into a leading technology innovation and research hub. By developing partnerships with companies such as Allstate, Bosch, CareerBuilder, HERE, and A.T. Kearney, the iD-Lab is growing into a unique center within the University. Current work in the lab focuses on three areas: (1) development of technology innovation projects with member companies; (2) education through training, workshops, and practical experience (http://ilab.innovation.depaul.dryele.com/); and (3) research on technology innovation (http://www.depaulidlab.com). Current work, research, and progress in the lab will be discussed.

Biography: Dr. Olayele Adelakun is an Associate Professor of MIS at DePaul University’s College of Computing and Digital Media (CDM) in Chicago, Illinois. His research focuses on IT outsourcing, ERP systems implementation, information systems quality, IT evaluation, and emotional intelligence among IT leaders. He has conducted studies in both medium-size companies and large multinational companies in Europe, Africa, and the United States. He has chaired several academic and industry-focused conferences. He started the study abroad program at CDM in 1994, and has also led several executive presentations. He has published over eighty articles in conferences, books, and journals. He holds an M.S. in Information Processing Science from the University of Oulu, Finland, and a Ph.D. in Information Systems from the Turku School of Economics and Business Administration, Turku, Finland. More information about the lab can be found at http://www.depaulidlab.com.

2/17/2017 – Shiyi Wei, PhD – Virginia Tech / University of Maryland
Towards Practical Program Analysis: Introspection and Adaptation

Abstract: Software is ubiquitous. As its importance grows, the mistakes made by programmers have an increasingly negative effect, leading to critical failures and security exploits. As software complexity and diversity grow, such negative effects become even more likely. Automated program analysis has the potential to help. A program analysis tool approximates possible executions of a program, and thereby can discover otherwise hard-to-find errors. However, significant challenges must still be overcome to make program analysis tools practical for real-world software.

I have gained substantial experience in building novel program analysis tools whose aim is to produce more secure and reliable software. Recently, I have focused on the challenge of building analysis tools that perform well (i.e., can analyze realistic code in a reasonable amount of time) and are precise (i.e., do not produce too many “false alarms”). To this end, I have developed an approach that systematically uncovers sources of imprecision and performance bottlenecks in program analysis. The goal is to significantly reduce the time-consuming manual effort otherwise required during the analysis design process. In addition, I have designed an adaptive analysis in which appropriate techniques are selected based on the coding styles of the target programs. Selection is based on heuristics derived from a machine learning algorithm. The idea is that precise techniques can be deployed only where and when they are needed, leading to a better balance overall.

Biography: Shiyi Wei is a post-doctoral associate at University of Maryland, College Park. He obtained his Ph.D. in Computer Science from Virginia Tech in 2015, and B.E. in Software Engineering from Shanghai Jiao Tong University in 2009. His research interests span the areas of Programming Languages, Software Engineering and Security. The goal of his research is to make program analysis practical for improving the security and reliability of real-world software. He has published articles at top venues in his areas of interest, such as FSE, ECOOP, and ISSTA. He has interned at IBM T. J. Watson Research Center.

2/24/2017 – Motahareh (Sara) Bahrami, PhD – Wichita State University
Effective Assignment and Assistance to Software Developers and Reviewers

Abstract: Software products are constantly growing in size, complexity, and application domains, among other things. It is not uncommon for large open source projects to receive several bug reports and new feature requests daily. These change requests need to be triaged and resolved in an efficient and effective manner to sustain the viability of the product in the marketplace; this is not a trivial task by any means. The development units, i.e., individuals and teams, need to perform several tasks, such as validating the change requests, assigning them to developer(s), implementing the necessary changes to the source code, reviewing the code changes, and then assembling them into a (new) release for the user base. Human reliance and dominance are ubiquitous in sustaining a high-quality large software system. Automatically assigning the right solution providers to the maintenance task at hand is arguably as important as providing the right tool support for it, especially given the all-too-common state of inadequate or obsolete documentation in large-scale software systems.

In this talk, several maintenance tasks related to assignment and assistance to software developers and reviewers are addressed, and multiple solutions are proposed. The key insight behind these proposed solutions is the analysis and use of micro-levels of human-to-code and human-to-human interactions. The formulated methodology consists of the restrained use of machine learning techniques, lightweight source code analysis, and mathematical quantification of different markers of developer and reviewer expertise from these micro-interactions.

Biography: Motahareh (Sara) Bahrami received her BSc and MSc degrees in Iran in 2006 and 2010, respectively. She is currently working toward the Ph.D. degree and is a member of the Software Engineering Research Laboratory (SERL) in the Department of Electrical Engineering and Computer Science at Wichita State University under the supervision of Dr. Huzefa Kagdi. Her primary research is in the area of software evolution/maintenance and empirical software engineering. The focus of her Ph.D. dissertation is on developing automated approaches that utilize information stored in software repositories to support the evolution of large-scale software systems. She mainly uses machine learning and data mining techniques, natural language processing, lightweight source code analysis, and mathematical quantification to conduct her research. The results of her research are published in the IEEE Transactions on Software Engineering (TSE), also presented at the IEEE/ACM International Conference on Software Engineering (ICSE 2016) and the ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE 2016), and in the ACM/IEEE Working Conference on Mining Software Repositories (MSR 2014 and MSR 2015).

After completing her undergraduate degree in Computer Engineering, she worked in industry for two years, where she carried out various assignments, ranging from Software Engineer to Database Administrator. She also has three years of experience teaching several undergraduate Computer Science courses in her home country, Iran. Sara is a recipient of several awards at Wichita State University, including the Wallace Graduate Student Research Award from the College of Engineering, the Dora Wallace Outstanding Doctoral-Level Student Award, and the Maha “Maggie” Sawan Fellowship for Graduate Students.

3/3/2017 @ 1:00 (First of two talks this day)
Zonghua Gu, PhD – University of Michigan
Analysis and Optimization of Resource-Constrained Real-Time Embedded and Cyber-Physical Systems

Abstract: My research addresses challenges in the design, analysis, and implementation of today’s complex real-time and embedded systems, with a particular focus on issues related to real-time and hardware resource constraints. Mass-produced consumer products such as cars are very cost-sensitive, since even a small decrease in Bill-of-Materials cost may lead to large savings for the product vendor. As a result, embedded developers tend to choose the cheapest hardware components that can satisfy performance/timing constraints, and have to cope with very limited CPU computing power and memory size during application development. My research aims to provide a set of algorithms and techniques for real-time analysis and design optimization of resource-constrained embedded systems. I will highlight some of my recent work, including analysis and optimization of mixed-criticality systems that integrate multiple applications with multiple levels of criticality to meet multiple levels of safety certification requirements; stack memory size reduction in multitasking systems with preemption threshold scheduling; optimized allocation and scheduling of applications on multicore processors with formal methods such as SAT modulo theories; and optimized allocation of data variables to on-chip scratchpad memory. In recent years, the term “embedded system” has been undergoing an identity crisis, as it is gradually being overshadowed and replaced by the latest buzzwords such as Internet of Things (IoT) and Cyber-Physical Systems (CPS). The broadened scope makes this research field increasingly diverse and interdisciplinary, going well beyond the traditional, narrow view of embedded programming on small microcontrollers.
I will provide an outlook on potential future research directions in the broad context of IoT and CPS, including high-performance embedded computing on heterogeneous platforms; implementation of artificial intelligence and machine learning algorithms in embedded systems; and integrated design of embedded software interacting with the physical environment.

Biography: Zonghua Gu received his Ph.D. degree in Computer Science and Engineering from the University of Michigan at Ann Arbor in 2004. He is currently an associate professor at Zhejiang University, China, and a visiting professor at the Technical University of Munich, Germany. His research area is real-time embedded and cyber-physical systems.

3/3/2017 @ 2:30 (Second of two talks this day)
Sheng Li, PhD – Northeastern University
Robust Representations for Data Analytics under Uncertainty

Abstract: High-dimensional data are ubiquitous in real-world applications, arising in images, videos, documents, online transactions, biomedical measurements, etc. Although data analytics in high-dimensional space is generally intractable due to the “curse of dimensionality”, significant progress has been made by exploiting the low-dimensional manifolds in high-dimensional data. Extracting effective and compact feature representations from high-dimensional data has become a critical problem in data science and machine learning. Traditional data analytics methods, especially statistical models, often make strong assumptions on the data distributions. However, real-world data might be contaminated by noise, or captured from multiple views. Such uncertainty can hinder the performance of data analytics. In this talk, I will describe some examples of my work in advancing robust data analytics under uncertainty, including: 1) low-rank and sparse modeling for robust graph construction and subspace discovery; 2) an efficient bilinear projection approach for multi-view time series classification; and 3) applications in outlier detection, visual intelligence, and knowledge transfer. I will conclude this talk by describing my future research plans in the interdisciplinary field of data science.

Biography: Sheng Li is a Ph.D. candidate at Northeastern University, Boston, MA. He has broad interests in data science and machine learning, including low-rank matrix recovery, multi-view learning, time series modeling, outlier detection, visual intelligence, and causal inference. He has published 40 papers at leading conferences and journals including IJCAI, KDD, SIGIR, CIKM, ICDM, SDM, ICCV, IEEE Trans. KDE, IEEE Trans. NNLS, IEEE Trans. IP, and IEEE Trans. CSVT. He received best paper awards (or nominations) at SDM 2014, ICME 2014, and IEEE FG 2013. He co-presented two tutorials at IJCAI 2016 and CVPR 2016. He is the recipient of Northeastern’s 2015 Outstanding Graduate Student Research Award. He has served on program committees for several major conferences such as IJCAI, AAAI, PAKDD, FG, and DSAA. He was a data scientist intern at Adobe Research.

3/10/2017 – Rana Forsati, PhD – Michigan State University
Matrix Completion and Distance Metric Learning with Side Information for Recommender Systems and Social Network Mining

Abstract: In recent years, recommender systems and social graph mining techniques using data-driven machine learning algorithms have been successfully employed to overcome information overload and to extract insightful information in many online applications. Among many algorithmic solutions, matrix completion, which aims at recovering a low-rank matrix from a partial sampling of its entries, has proven a successful method in collaborative filtering for recommender systems (e.g., the Netflix problem), missing data prediction, dimensionality reduction, and social graph mining (e.g., link prediction and network completion).

However, matrix completion methods perform poorly in practice, especially when the observed matrix is sparse, when some of the rows or columns are entirely missing (the so-called cold-start problem), or when the observed ratings are not sampled uniformly at random. Recently, there has been an upsurge of interest in utilizing other rich sources of side information about items and users, such as social or trust/distrust relationships between users and metadata about items, to compensate for the insufficiency of rating information and mitigate the cold-start users/items problem.

In this talk, we introduce a novel algorithmic framework for matrix completion that exploits similarity information about users and items to alleviate the data sparsity issue, and specifically the cold-start problem. In contrast to existing methods, our proposed algorithm decouples two aspects of matrix completion to effectively exploit the side information: (i) the completion of a rating sub-matrix, which is generated by excluding cold-start users/items from the original rating matrix; and (ii) the transduction of knowledge from existing ratings to cold-start items/users using side information. We provide theoretical guarantees on the recovery error of the proposed decoupled algorithm and demonstrate its merits through experiments on real-world data sets.
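As a toy illustration of the underlying low-rank completion problem (a generic sketch, not the speaker's decoupled algorithm), a missing entry of a rank-1 "ratings" matrix can be recovered by alternating between a rank-1 SVD approximation and re-imposing the observed entries:

```python
import numpy as np

# Ground-truth rank-1 matrix; hide one entry to simulate a missing rating.
M = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # M[2, 2] = 18
mask = np.ones_like(M, dtype=bool)
mask[2, 2] = False

Z = np.where(mask, M, 0.0)  # initialize the missing entry to 0
for _ in range(300):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    Z = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 approximation
    Z[mask] = M[mask]                    # keep observed entries fixed

print(f"recovered missing entry: {Z[2, 2]:.4f}")  # converges to 18
```

Real rating matrices are far sparser and noisier than this, which is precisely where naive completion degrades and side information becomes valuable, as the talk discusses.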

We then discuss a unified framework to aggregate multiple sources of side information about users/items into a single distance metric that can be used in different recommendation methods. By modeling different types of side information as a similarity/dissimilarity constraint graph between entities, we cast the problem of learning from multiple sources as a distance metric learning (DML) problem from constraint graphs and introduce an efficient algorithm to learn such a metric. Aggregation of such information is of great importance, especially when a single view of the data is sparse or incomplete.

Biography: Rana Forsati has been a postdoctoral researcher and instructor in the Computer Science and Engineering Department at Michigan State University since 2014. She obtained her Ph.D. (with honors) from Shahid Beheshti University (formerly the National University of Iran) in 2014. She also spent a year as a visiting research scholar at the University of Minnesota during her Ph.D. studies. Her research interests include applied machine learning, data mining, and optimization, with applications in recommender systems, social graph analysis, and natural language processing.