Throughout my academic career I have been actively involved in pedagogical activities, both in a leading teaching role and as a teaching assistant. I have also supervised students at both the master's and PhD levels, as listed below.
The course covers concepts, methodologies and technologies for building modern web sites. This includes markup languages, scripting languages, event-driven programming and databases, which are used to create real-world web applications. In addition to technologies, the course also covers architecture and graphical user interfaces.
How do you develop a software system? This course presents a set of methods and techniques used to guide the software development process. These methods and techniques represent current developments in the software engineering field; in particular, they include the Unified Process (UP), Extreme Programming (XP), Agile Modeling, and several other established as well as relatively new approaches.
Web services offer a new and evolving paradigm for building distributed applications. They enable any organization or individual to make its digital assets available with unprecedented ease and convenience. This course introduces fundamental principles and techniques for building Web services, and provides training in applying these techniques to the programming of particular Web services.
This course introduces fundamental principles and techniques of Distributed Artificial Intelligence (DAI), as well as the use of such techniques for creating applications in distributed computing environments. Central to the course are the concepts of "intelligent agents", as a paradigm for creating autonomous software components, and "multi-agent systems", as a way of providing coordination and communication between individual autonomous software components.
OLLDA: Dynamic and Scalable Topic Modelling for Twitter (concluded in 2015), ICT/KTH
Providing high-quality topic inference in today's large and dynamic corpora, such as Twitter, is a challenging task. It is especially challenging given that content in this environment consists of short texts with many abbreviations. This project proposes an improvement of a popular online topic modelling algorithm based on Latent Dirichlet Allocation (LDA), incorporating supervision to make it suitable for the Twitter context. The improvement is motivated by the need for a single algorithm that achieves two objectives at once: analyzing huge numbers of documents, including new documents arriving in a stream, while achieving high-quality topic detection in special-case environments such as Twitter.
The proposed algorithm combines an online algorithm for LDA with a supervised variant of LDA, labeled LDA. The performance and quality of the proposed algorithm are compared with those of the two base algorithms. The results demonstrate that the proposed algorithm achieves better performance and quality than the supervised variant of LDA, and better quality than the online algorithm. These improvements make our algorithm an attractive option for dynamic environments like Twitter. An environment for analyzing and labelling data was designed to prepare the dataset before executing the experiments. Possible application areas for the proposed algorithm are tweet recommendation and trend detection.
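To illustrate the core idea behind the supervised component, the following is a minimal sketch (not the project's actual implementation) of a labeled-LDA-style inference step, in which a document's topic responsibilities are restricted to its label set. All names and the simplified variational update are assumptions made for illustration.

```python
import numpy as np

def labeled_lda_estep(doc_word_counts, doc_labels, topic_word, n_iters=20):
    """Estimate topic proportions for one document, restricted to the
    document's labels (the key constraint of labeled LDA).

    doc_word_counts: (V,) array of word counts for the document
    doc_labels:      list of topic indices the document is labeled with
    topic_word:      (K, V) row-stochastic topic-word distributions
    Returns a (K,) topic-proportion estimate (zero outside doc_labels).
    """
    K, V = topic_word.shape
    theta = np.zeros(K)
    theta[doc_labels] = 1.0 / len(doc_labels)   # uniform over allowed labels
    for _ in range(n_iters):
        # word-topic responsibilities phi[k, w] ∝ theta[k] * topic_word[k, w];
        # entries outside the label set stay zero because theta is zero there
        phi = theta[:, None] * topic_word
        phi /= phi.sum(axis=0, keepdims=True) + 1e-12
        theta = (phi * doc_word_counts[None, :]).sum(axis=1)
        theta /= theta.sum() + 1e-12
    return theta
```

In an online setting, the sufficient statistics gathered from each mini-batch of such per-document estimates would be blended into `topic_word` with a decaying learning rate, which is what makes the streaming combination attractive.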
The question of whether Twitter data can be leveraged to forecast election outcomes has long attracted great interest in the research community. Existing research focuses on content analysis, gauging the positivity or negativity of the sentiments expressed. By contrast, analysis of the link-structure features of the social networks underlying conversations involving politicians has received far less attention. The intuition behind such a study is that the density of conversations about parties and their respective members, whether explicit or implicit, should reflect their popularity, while the dynamism of interactions can capture inherent shifts in the popularity of politicians' accounts. In this manuscript we present evidence of how a well-known link prediction algorithm can reveal an authoritative structural link formation within which the popularity of political accounts, along with their neighbourhoods, correlates strongly with electoral outcomes. As evidence, the public timelines on Twitter of two electoral events from the 2014 Swedish elections are studied. By distinguishing between member and official party accounts, we report that even using a focus-crawled public dataset, structural link popularities bear strong statistical similarities to vote outcomes. In addition, we report strong rank dependence between the standings of selected politicians and the general election outcome, as well as between official party accounts and the European election outcome.
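The abstract does not fix which link prediction algorithm was used; as an illustration of this family of methods, here is a self-contained sketch of Adamic-Adar scoring, one of the most widely used structural link predictors. The graph and interface are hypothetical examples.

```python
import math
from collections import defaultdict

def adamic_adar(edges):
    """Adamic-Adar link-prediction scores for all non-adjacent node pairs:
    pairs sharing low-degree common neighbours score highest."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    nodes = sorted(nbrs)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in nbrs[u]:
                continue  # edge already exists: nothing to predict
            common = nbrs[u] & nbrs[v]
            if common:
                scores[(u, v)] = sum(1.0 / math.log(len(nbrs[z]))
                                     for z in common if len(nbrs[z]) > 1)
    return scores
```

On a mention/retweet graph of political accounts, ranking candidate pairs by such scores surfaces the dense, authoritative neighbourhoods whose popularity the study correlates with vote outcomes.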
Liu Pu Zheng
Users of social networks have shown increasing concern about exposing their personal data to untrusted entities in order to receive recommendations. In this work, we describe the components of a privacy-aware, collaborative-filtering-based recommender framework that targets two important issues in recommender systems operating in a social network: the privacy concerns of profile owners and the sparsity of social trust among users. Assuming an initial global privacy in the social network, the framework employs a probabilistic matrix factorization technique to estimate the quality of the missing trust relation between each pair of users. Because of the latent features inferred by matrix factorization, the resulting trust is an augmentation of both social-relation-driven and user-similarity-driven trust. We introduce a privacy inference model that exploits the underlying inter-entity trust information to obtain a personalized privacy view for each individual in the social network. Using this personalized privacy view, we employ an off-the-shelf collaborative filtering recommender system to make predictions. Experimental results show that the proposed approach obtains better accuracy than comparable non-privacy-aware recommender systems, while at the same time meeting profile privacy concerns.
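As a minimal sketch of the matrix-factorization step, the following fits latent user factors to the observed trust entries by SGD and fills in missing pairwise trust from the learned factors. This is an illustrative toy, not the paper's exact probabilistic model or hyper-parameters.

```python
import numpy as np

def pmf_trust(trust, rank=5, steps=1000, lr=0.01, reg=0.01, seed=0):
    """Factor an (n, n) trust matrix with np.nan for unobserved pairs
    into U @ V.T; the product serves as the estimate of missing trust."""
    rng = np.random.default_rng(seed)
    n = trust.shape[0]
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    observed = list(zip(*np.where(~np.isnan(trust))))
    for _ in range(steps):
        for i, j in observed:
            err = trust[i, j] - U[i] @ V[j]          # residual on known entry
            U[i] += lr * (err * V[j] - reg * U[i])   # regularized SGD steps
            V[j] += lr * (err * U[i] - reg * V[j])
    return U @ V.T  # estimated trust for every user pair, observed or not
```

The latent factors are what let the estimate blend social-relation and similarity signals: two users end up with high predicted trust whenever their factor vectors align, even if they never rated each other directly.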
Collaborative filtering (CF) recommender systems are among the most popular approaches to solving the information-overload problem in social networks, generating accurate predictions based on the ratings of similar users. Traditional CF recommenders suffer from a lack of scalability, while decentralized CF recommenders (DHT-based, gossip-based, etc.) promise to alleviate this problem. In this thesis we therefore propose a decentralized approach to CF recommender systems that uses the T-Man algorithm to create and maintain an overlay network which in turn facilitates the generation of recommendations based on a node's local information. We analyze the influence of the number of rounds and neighbors on prediction accuracy and item coverage, and we propose a new approach to inferring trust values between a user and its neighbors. Our experiments on three important datasets show an improvement in prediction accuracy relative to previous approaches while using a highly scalable, decentralized paradigm. We also analyze item coverage and show that our system is able to generate predictions for a significant fraction of the users, which is comparable with centralized approaches.
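To make the overlay-maintenance idea concrete, here is a toy sketch of one T-Man-style gossip round: each node merges its view with a peer's and keeps the neighbours it ranks highest under a similarity function. The interface and simplifications (synchronous rounds, a single random peer) are assumptions for illustration, not the full protocol.

```python
import random

def tman_round(views, similarity, view_size):
    """One synchronous round of T-Man-style view exchange.

    views:      dict mapping node -> set of neighbour nodes
    similarity: callable (node, peer) -> comparable score, higher is better
    view_size:  number of neighbours each node retains
    """
    new_views = {}
    for node, view in views.items():
        peer = random.choice(sorted(view)) if view else node
        # merge own view, the peer's view, and the peer itself; drop self
        merged = (view | views.get(peer, set()) | {peer}) - {node}
        ranked = sorted(merged, key=lambda p: similarity(node, p), reverse=True)
        new_views[node] = set(ranked[:view_size])
    return new_views
```

Repeating such rounds with a rating-similarity ranking drives each node's view toward its most similar users, so recommendations can then be computed purely from the node's local neighbourhood.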
The internet is being integrated into nearly every aspect of individuals' daily lives. Social networks are made up of user profiles, which are collections of users' personal data and their relations with other users. Many relations between users are based on trust, yet trust and privacy are not captured and represented in profiles or in personalized recommendations. This introduces the need for an intelligent social mining service that can analyze a person's profile or related data on the basis of matching corresponding interests or likings. In this work, we propose a generic architecture for social web mining in which we create a user model for each user based on their tweets, mine their data to infer relationships among them, and make suggestions based on those relationships. Our framework captures the trust between individuals based on their user models, and to preserve their privacy, trust is used to further filter out the more valuable connections. We present initial experimental results with a 2009 Twitter dataset.
Most cooperative businesses rely on some form of social networking system to facilitate user profiling and the networking of their employees. To facilitate discovery, matchmaking, and networking among co-workers across enterprises, social recommendation systems are often used. The off-the-shelf nature of these components often makes it hard for individuals to control their exposure as well as their preferences for whom to connect to. To this end, trust-based recommenders have been among the most popular and sought-after solutions, owing to their advantage of using social trust to generate more accurate suggestions for peers to connect to; they also allow individuals to control their exposure based on explicit trust levels. In this work we propose an enterprise trust-based recommendation system with privacy controls. To generate accurate predictions, a local trust metric is defined between users based on correlations of users' profiled content, such as blog posts, articles written, comments, and likes, along with profile information such as organization, region, interests, or skills. The privacy metric is defined in such a way that users have full freedom either to hide their data from the recommender or to customize their profiles so that they are visible only to users with a defined level of trustworthiness.
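As a toy illustration of deriving local trust from content correlations, the sketch below scores a pair of users by the cosine similarity of their profile feature vectors (counts of posts, comments, likes, shared tags, and so on). The actual metric in the work combines several such signals; the feature names here are hypothetical.

```python
import math

def profile_trust(features_a, features_b):
    """Cosine similarity between two users' profile feature dicts,
    used as a simple stand-in for a content-correlation trust score."""
    dot = sum(features_a.get(k, 0) * v for k, v in features_b.items())
    norm_a = math.sqrt(sum(v * v for v in features_a.values()))
    norm_b = math.sqrt(sum(v * v for v in features_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```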
Collaborative filtering based on similarity suffers from a variety of problems, such as sparsity and scalability. In this paper, we propose an ontological model of trust between users on a social network to address the limitations of the similarity measure in collaborative filtering algorithms. To enhance the constructed trust-based network of users, we introduce an estimate of a user's trustworthiness, called T-index, to identify and select neighbors in an effective manner. We employ the T-index to store the raters of an item in a so-called TopTrustee list, which provides information about users who might not be accessible within a predefined maximum path length. An empirical evaluation shows that, by exploiting the T-index, our solution improves both the prediction accuracy and the coverage of recommendations collected along the few edges that connect users on a social network. We also analyze the effect of the T-index on the structure of the trust network to justify these results.
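The exact T-index definition is not given in this summary; as a hedged illustration of summarizing incoming trust by a single robust number, here is an h-index-style computation: the largest t such that at least t incoming trust ratings are at least t. Treat this as a plausible sketch of the idea, not the paper's formula.

```python
def t_index(incoming_trust):
    """h-index-style trustworthiness estimate over a user's incoming
    trust ratings (assumed here to be on a small integer scale)."""
    ratings = sorted(incoming_trust, reverse=True)
    t = 0
    for rank, rating in enumerate(ratings, start=1):
        if rating >= rank:
            t = rank  # at least `rank` ratings are >= rank
        else:
            break
    return t
```

A score like this is hard to inflate with a few generous raters, which is what makes it useful for ranking candidate neighbors and populating a TopTrustee list.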