Key Opinion Leaders in Recommendation Systems: Opinion Elicitation and Diffusion


Google presentation: https://docs.google.com/presentation/d/1FLrmmv2Gw23gym-Do3XSFd66wIAZqeXW1W4cMa25Yc8/edit?usp=sharing

The power of Key Opinion Leaders (KOLs)

A Mediakix survey conducted at the end of 2018 found that 49% of consumers depend on KOLs’ recommendations for their purchase decisions.

Holding prominent positions in the community (e.g., having a large number of followers), KOLs can diffuse their opinions through the community, influencing what items we buy, what media we consume, and how we interact with online platforms.

😵 Problems

Despite its importance, investigating the influence of KOLs in recommendation systems is a non-trivial task due to two major challenges:

  1. Elicitation: Compared to regular users, KOLs tend to express their opinions on items explicitly (e.g., reviews, ratings, or tags) rather than leave implicit feedback (e.g., views, clicks, or purchases). More importantly, such explicit interactions are inherently multi-relational, and the opinions of KOLs carry distinct meanings (e.g., the tags “fantastic” and “terrible” are semantically different). So how can we extract the elite opinions of KOLs from such multi-relational data?

  2. Diffusion: For example, users tend to purchase makeup products on the recommendation of beauty KOLs they follow. Meanwhile, previous research [2, 3, 4] has shown that user preferences on items can diffuse through high-order connectivity (e.g., in Figure 1, the latent preference of user A can diffuse via the transitive path A → q → B → w to items he/she hasn’t interacted with). Therefore, the influence of KOLs will also propagate to non-direct followers in the community. So how can we model this elite opinion diffusion process to improve recommendations?

💡 Solutions

GoRec: a novel end-to-end Graph-based neural model to incorporate the influence of KOLs for Recommendation

  1. Elicitation: A translation-based embedding method to elicit the opinions of KOLs.

  2. Diffusion: Multiple Graph Neural Network (GNN) layers to model the diffusion process of elite opinions.

Before scrutinizing the paper’s solution, let’s define KOLs in real-world datasets and perform some initial analyses to confirm a few hypotheses.

In two real-world datasets, Goodreads (a book-sharing community) and Epinions (an e-commerce review-sharing platform), we ranked all accounts by number of followers and treated the top accounts as KOLs. With this definition in hand, we can examine the following hypotheses:

▶️ A small number of key opinion leaders (KOLs) can provide sufficient coverage. In Goodreads, we found that more than 95% of users follow at least one of the top 500 KOLs, so a small number of KOLs already provides sufficient coverage. Figure 2(a) shows more details.

Figure 2a. Coverage: The percentage of users following at least one of the top (key) opinion leaders. More than 95% of users follow at least one of the top 500 accounts.

▶️ Users are influenced by the KOLs they follow.

We represented each user with a binary vector over all books, in which a “1” indicates that the user has left implicit feedback on that book, and represented each KOL with five binary vectors, one per rating level (1 to 5), indicating which books the KOL rated at that level. Books read by users turn out to be most similar to the books their followed KOLs rated highly, so the explicit opinions of KOLs directly influence what their followers consume. Figure 2(b) shows more details.

Figure 2b. Books read by users are more similar to books with higher ratings from key opinion leaders they are following.
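To make this analysis concrete, here is a minimal NumPy sketch (not the paper’s code) of the comparison; the toy data and the use of cosine similarity as the similarity measure are assumptions for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two binary indicator vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom > 0 else 0.0

# Toy setup: 6 books, one user, one KOL the user follows.
user_reads = np.array([1, 1, 0, 1, 0, 0])  # implicit feedback of the user

# One binary vector per rating level (1..5): kol_rated[r-1][j] = 1
# iff the KOL rated book j with r stars.
kol_rated = np.array([
    [0, 0, 1, 0, 0, 0],  # 1-star
    [0, 0, 0, 0, 1, 0],  # 2-star
    [0, 0, 0, 0, 0, 1],  # 3-star
    [1, 0, 0, 0, 0, 0],  # 4-star
    [0, 1, 0, 1, 0, 0],  # 5-star
])

for r, rated in enumerate(kol_rated, start=1):
    print(f"rating {r}: similarity = {cosine(user_reads, rated):.3f}")
# In the Goodreads analysis, similarity rises with the rating level:
# users tend to read the books their KOLs rated highly.
```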

▶️ Compared to ordinary users, KOLs tend to express opinions on items explicitly.

In Figure 2(c), we can see that ordinary users and KOLs leave implicit feedback on a similar number of books, indicating that both are active in their use of Goodreads. Nevertheless, KOLs are more engaged in explicitly sharing their opinions on books.

Figure 2c. While leaving similar amounts of implicit feedback, key opinion leaders prefer to express their opinions on items via explicit interactions (reviews, ratings, self-defined tags).

With these observations in mind, we can concentrate on how GoRec solves the two challenges.

Problem Setting and Notation

In this work, we aim to provide Top-K recommendations from a candidate set of M items I = {i₁, i₂, …, i_M} to a set of N users U = {u₁, u₂, …, u_N}.

ℹ️ User-item Interaction Graph: a bipartite graph G = (V, W) in which the node set V consists of all users and items. An edge (u, i) ∈ W denotes that user u has left implicit feedback on item i.

Elite Opinion Graph: We use L = {l₁, l₂, …, l_P} to represent the set of P key opinion leaders (KOLs) and O = {o₁, o₂, …, o_Q} to represent Q different types of explicit opinions. Each opinion triplet (l, o, i) represents that KOL l left opinion o on item i; from these triplets we can construct a directed elite opinion bipartite graph Gₒ.

💫 User-KOL Following: We use Fᵤ ⊂ L to represent the set of KOLs followed by user u ∈ U, and we let U ∩ L = ∅ (no overlap between ordinary users and KOLs).

🏃 First, we elicit the opinions of KOLs to improve the quality of recommendation.

Translation-based Opinion Elicitation

Analogous to the data structure of a knowledge graph, the resulting elite opinion graph Gₒ consists of many valid opinion triplets. For example, the triplet (l₁, Review: wizard, Harry Potter) denotes that KOL l₁ mentions the word wizard in a review of Harry Potter. We can also construct opinion triplets from ratings or tags provided by KOLs, yielding triplets like (l₁, Rate: 5, Harry Potter) or (l₁, Tag: fiction, Harry Potter).

Knowledge Graph: a semantic network obtained by connecting heterogeneous information, providing the ability to analyze problems from a “relationship” perspective.

Our goal is to generate effective embeddings for both items and KOLs in a continuous vector space while preserving the multiple relations (opinions) between them. We list three features of Gₒ, each followed by the corresponding design we propose for the opinion elicitation process:

☝️ Diverse relationships: Opinions carry distinct meanings, e.g., the tags “fantastic” and “terrible” are semantically different.

Translation from KOL to Item. We adopt an idea from multi-relational graph embedding [6, 7, 8, 9]: given a valid opinion triplet (k, o, i), we want the embedding of item i to be close to the embedding of KOL k plus the embedding of opinion o. Let s(k, o, i) denote the scoring function for this translation operation, where a larger value means a better translation.

To learn effective graph embeddings, our objective is to maximize the translation score for all valid triplets while minimizing it for invalid triplets (k would not attach opinion o to i′). This gives the margin-based ranking loss:

Lₒ = Σ_{(k,o,i)∈Gₒ} Σ_{(k,o,i′)∉Gₒ} [γ + s(k, o, i′) − s(k, o, i)]₊

in which [·]₊ ≜ max(0, ·), and γ is a hyper-parameter denoting the margin the model uses to separate valid triplets from invalid ones.

We describe how to calculate the score s(k, o, i) below.

✌️ Many-to-many relations: On one hand, a KOL and an item can be connected by multiple relations (e.g., a KOL rates a book 5 and also leaves a comment on it). On the other hand, KOLs endow opinions with their own attitudes (e.g., each KOL has his/her own criteria for tagging a book with “BestOf2019”).

Dynamic Mapping Matrix. To handle the many-to-many relations, a common strategy is to project KOLs and items into an opinion-specific space before the translation operation. We adopt a dynamic mapping matrix [7] determined by both the opinion and the KOL (or item). Each KOL, item, and opinion is represented by two vectors: one acts as its embedding, while the other is used to transfer the KOL (or item) into the opinion-specific space (as in Figure 3). Given a triplet (k, o, i), we initialize dense vectors kᵉ, kᵗ, oᵉ, oᵗ, iᵉ, iᵗ, with superscripts e and t meaning embedding and transfer respectively.

Since we aim to calculate the embedding translation score for these triplets, we need to transfer the embeddings into the same space. Accordingly, following the dynamic mapping construction of [7], we build the mapping matrices:

M_ok = oᵗ(kᵗ)ᵀ + I    (mapping matrix for transferring kᵉ to the space of o)

M_oi = oᵗ(iᵗ)ᵀ + I    (mapping matrix for transferring iᵉ to the space of o)

in which I denotes the identity matrix used to initialize the mapping matrix. Thus we get the projected representations of k and i:

k⊥ = M_ok kᵉ,    i⊥ = M_oi iᵉ

We can utilize the projected k⊥ and i⊥ to evaluate the translation distance for the triplet (k, o, i). A larger s(k, o, i) means k and i are closer to each other under translation o, i.e., it is more likely that k attaches opinion o to i. Empirically, we use the L2-norm to calculate the distance:

s(k, o, i) = −‖k⊥ + oᵉ − i⊥‖₂
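Putting the above together, here is a minimal PyTorch sketch of the dynamic-mapping translation score and the margin-based ranking loss; the single-vector inputs, variable names, and default margin value are illustrative assumptions, not the paper’s implementation.

```python
import torch

def transd_score(k_e, k_t, o_e, o_t, i_e, i_t):
    """Translation score s(k, o, i) with dynamic mapping matrices [7].
    All inputs are 1-D tensors of dimension d; larger score = better."""
    d = k_e.shape[0]
    I = torch.eye(d)
    M_ok = torch.outer(o_t, k_t) + I  # transfers k_e into the space of o
    M_oi = torch.outer(o_t, i_t) + I  # transfers i_e into the space of o
    k_proj = M_ok @ k_e
    i_proj = M_oi @ i_e
    return -torch.norm(k_proj + o_e - i_proj, p=2)  # negative L2 distance

def margin_loss(s_valid, s_invalid, gamma=1.0):
    """Hinge loss separating a valid triplet from a corrupted one by margin gamma."""
    return torch.clamp(gamma + s_invalid - s_valid, min=0.0)
```

In training, s_invalid would come from corrupted triplets (k, o, i′) obtained by replacing the item of a valid triplet.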

👌 Preference Signals: KOLs have preferences over the items they interact with, e.g., a lover of romance novels may seldom leave feedback on horror novels.

Personalized Ranking Model. A typical assumption is that items with feedback from a user are preferred over those without, so we can also utilize these preference signals. Following the basic idea of matrix factorization, we use the inner product of kᵉ and iᵉ, i.e., p(k, i) = kᵉ·iᵉ, to capture the preference of k for i. A positive pair (k, i) means k has left feedback on i, and a negative pair (k, i′) means k has not left feedback on i′. The objective function for modeling these preference signals is:

L_p = −Σ_{(k,i,i′)∈S} ln δ( p(k, i) − p(k, i′) )

We use Bayesian Personalized Ranking (BPR) [10] to maximize the difference in preference scores between the positive and negative pairs, where δ(·) denotes the Sigmoid function and S is the training set.

Example of matrix factorization
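A minimal sketch of the BPR objective described above, assuming the KOL and item embeddings are already available as (batched) tensors; names are illustrative.

```python
import torch
import torch.nn.functional as F

def bpr_loss(k_e, i_pos_e, i_neg_e):
    """BPR pairwise loss [10]: push p(k, i_pos) above p(k, i_neg),
    where p(k, i) is the inner product of KOL and item embeddings."""
    p_pos = (k_e * i_pos_e).sum(dim=-1)  # preference for items with feedback
    p_neg = (k_e * i_neg_e).sum(dim=-1)  # preference for sampled negatives
    return -F.logsigmoid(p_pos - p_neg).mean()
```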

👏 This completes the opinion elicitation, leading to the following joint loss function:

L_e = Lₒ + β·L_p

where β adjusts the weight of the pairwise loss in capturing the preference signals.

By minimizing this joint loss, we obtain the set of embeddings K for KOLs and I for items, which inherit both the explicit opinion information and the preference signals of the elite opinion graph Gₒ.

🏃 Finally, let’s highlight how to enrich the initial user/item embeddings with elite opinions and how to model the elite opinion diffusion process with graph neural networks.

Fusing Layer (Users)

Each user is associated with an embedding eᵤ representing his/her initial interest, which can be trained from the user’s one-hot index with a fully-connected dense layer. Moreover, a user pays a different amount of attention to each KOL he/she follows, so we model this dynamic (personalized) linkage between users and KOLs. Given that Fᵤ is the set of KOLs that u follows, the influence of elite opinions on user u is:

nᵤ = Σ_{p∈Fᵤ} αᵤₚ kₚᵉ

where αᵤₚ is a trainable attentive weight.

The weight αᵤₚ of KOL p’s influence on user u can be calculated as:

αᵤₚ = softmax_p( zᵀ ReLU(W [eᵤ ‖ kₚᵉ] + b) )

Here ‖ represents the concatenation operation, W and b are the weight matrix and bias of the attention layer, and z is a transformation vector that projects the ReLU(·) outputs into a common space before normalization over Fᵤ.

👏 By fusing nᵤ (the influence of elite opinions on user u) with the initial embedding of u, we obtain the cornerstone for opinion diffusion:

xᵤ⁽⁰⁾ = ReLU(W [eᵤ ‖ nᵤ])

where W is a transformation matrix that maps the concatenated vector into the same space as the other embeddings.

Fusing Layer (Items)

Similarly, each item starts with a trainable dense representation eᵢ associated with its index. Since KOLs can influence how the whole community views an item, we complement eᵢ with the KOL-defined features iᵉ. Thus we adopt a similar fusion operation to generate the enriched representation of item i:

xᵢ⁽⁰⁾ = ReLU(W′ [eᵢ ‖ iᵉ])

where W′ is a transformation matrix and iᵉ is the item embedding obtained from the dynamic mapping step above.
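The user-side fusing layer could look roughly like the following PyTorch sketch; the softmax normalization of the attention scores and all dimension choices are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserFusingLayer(nn.Module):
    """Attentively aggregate followed-KOL embeddings into n_u,
    then fuse n_u with the user's initial embedding e_u."""
    def __init__(self, d):
        super().__init__()
        self.att = nn.Linear(2 * d, d)         # W and b of the attention layer
        self.z = nn.Parameter(torch.randn(d))  # transformation vector z
        self.fuse = nn.Linear(2 * d, d, bias=False)  # fusing matrix W

    def forward(self, e_u, kol_embs):
        # kol_embs: (|F_u|, d), embeddings of the KOLs user u follows
        pairs = torch.cat([e_u.expand_as(kol_embs), kol_embs], dim=-1)
        scores = F.relu(self.att(pairs)) @ self.z      # one score per KOL
        alpha = torch.softmax(scores, dim=0)           # attention weights α_up
        n_u = (alpha.unsqueeze(-1) * kol_embs).sum(0)  # elite-opinion influence
        return F.relu(self.fuse(torch.cat([e_u, n_u], dim=-1)))  # x_u^(0)
```

The item-side fusion is analogous, concatenating eᵢ with iᵉ instead of nᵤ.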

Opinion Diffusion with GNNs

As Figure 1 shows, user preferences on items can diffuse through high-order connectivity, so the elite opinions of KOLs will also propagate to non-direct followers in the community. In this paper, we propose to model this opinion diffusion process with graph neural networks (GNNs).

The core idea of GNNs is that each layer learns node embeddings by aggregating the features of neighbors. On the user-item bipartite graph, given the sets of neighbors Nᵤ and Nᵢ directly connected with u and i respectively, we formulate the message passed on edge (u, i) from i to u as:

m_{i→u} = (|Nᵤ||Nᵢ|)^(−1/2) Wᵤ⁽⁰⁾ xᵢ⁽⁰⁾

where xᵢ⁽⁰⁾ is the representation of i with influence from KOLs and Wᵤ⁽⁰⁾ is a trainable transformation matrix for users at layer 0. (|Nᵤ||Nᵢ|)^(−1/2) is a normalization constant between u and i: dividing by |Nᵤ| averages all “item to current user” messages, while dividing by |Nᵢ| splits an item’s message across its adjacent users (e.g., if item A has 4 messages and 2 neighbors, each neighbor gets 4/2 messages, not 4).

Then we sum up all the messages passed to u to generate its representation xᵤ⁽¹⁾:

xᵤ⁽¹⁾ = τ( Σ_{i∈Nᵤ} m_{i→u} )

where τ is the activation function; we choose ReLU in this work empirically.

Similarly, we can generate the representation of item i at this layer with:

xᵢ⁽¹⁾ = τ( Σ_{u∈Nᵢ} (|Nᵤ||Nᵢ|)^(−1/2) Wᵢ⁽⁰⁾ xᵤ⁽⁰⁾ )
After generating the representations of u and i from the first GNN layer, we can further capture high-order diffusion by stacking multiple GNN layers. Specifically, at the Lth layer we have:

xᵤ⁽ᴸ⁾ = τ( Σ_{i∈Nᵤ} (|Nᵤ||Nᵢ|)^(−1/2) Wᵤ⁽ᴸ⁻¹⁾ xᵢ⁽ᴸ⁻¹⁾ )

so that xᵤ⁽ᴸ⁾ inherits the embeddings of users and items from previous layers (and analogously for xᵢ⁽ᴸ⁾).

In this way, GNNs let us capture the diffusion of elite opinions through multi-order user-item connectivity.
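A dense-matrix PyTorch sketch of one diffusion layer under the normalization above; a real implementation would use a sparse adjacency matrix, and all names and shapes here are assumptions.

```python
import torch
import torch.nn.functional as F

def gnn_layer(x_u, x_i, adj, W_u, W_i):
    """One diffusion layer on the user-item bipartite graph.
    x_u: (N, d) user reps, x_i: (M, d) item reps,
    adj: (N, M) binary interaction matrix, W_u/W_i: (d, d)."""
    deg_u = adj.sum(dim=1).clamp(min=1)  # |N_u|, clamped to avoid div-by-zero
    deg_i = adj.sum(dim=0).clamp(min=1)  # |N_i|
    # per-edge normalization (|N_u||N_i|)^(-1/2)
    norm_adj = adj / torch.sqrt(deg_u.unsqueeze(1) * deg_i.unsqueeze(0))
    x_u_next = F.relu(norm_adj @ (x_i @ W_u.T))    # item -> user messages
    x_i_next = F.relu(norm_adj.T @ (x_u @ W_i.T))  # user -> item messages
    return x_u_next, x_i_next
```

Stacking this function L times (with fresh weight matrices per layer) yields xᵤ⁽ᴸ⁾ and xᵢ⁽ᴸ⁾.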

Our final step is to infer the user u’s preference for all the items. 😍

We use a fully-connected layer to decode the output of the GNN layers. That is, for user u, we decode xᵤ⁽ᴸ⁾ to reconstruct his/her feedback vector yᵤ:

ŷᵤ = δ(V xᵤ⁽ᴸ⁾ + b′)

where V and b′ are the weight matrix and bias term respectively, and δ(·) represents the Sigmoid function.

The objective is to minimize the reconstruction loss:

L_r = Σᵤ ‖ sᵤ ⊙ (ŷᵤ − yᵤ) ‖²

where sᵤ is a binary masking vector. Since the feedback is usually extremely sparse, we only consider the 1s in yᵤ when calculating the loss.
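A small sketch of the decoding step and the masked reconstruction loss, assuming the squared-error formulation suggested by the description above; names and shapes are illustrative.

```python
import torch

def decode_and_loss(x_u, y_u, mask_u, V, b):
    """Decode the final GNN representation into a feedback vector and
    compute the masked reconstruction loss.
    x_u: (d,), y_u and mask_u: (M,), V: (M, d), b: (M,)."""
    y_hat = torch.sigmoid(V @ x_u + b)  # reconstructed feedback vector
    diff = mask_u * (y_hat - y_u)       # only masked-in entries contribute
    return (diff ** 2).sum()
```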

🎊 Finally, we can write the objective function of the full model (GoRec):

L = L_r + λ·L_e

Thus we arrive at the GoRec model (Figure 4), which combines both tasks end-to-end, with the hyper-parameter λ balancing them.
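As a rough sketch of how the two tasks combine end-to-end (the default λ and β here are placeholders, not the paper’s tuned values):

```python
def gorec_loss(rec_loss, trans_loss, pref_loss, lam=0.1, beta=0.5):
    """Joint GoRec-style objective: L = L_r + λ(L_o + β·L_p)."""
    return rec_loss + lam * (trans_loss + beta * pref_loss)
```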

Experiment results

In experiments on Goodreads and Epinions, the proposed model outperforms state-of-the-art approaches by 10.75% and 9.28% on average in Top-K item recommendation.

And that’s a wrap! Enjoy. 🎆

References:

  1. Jianling Wang, Kaize Ding, Ziwei Zhu, Yin Zhang, and James Caverlee. 2020. Key Opinion Leaders in Recommendation Systems: Opinion Elicitation and Diffusion. In WSDM.

  2. Yun He, Haochen Chen, Ziwei Zhu, and James Caverlee. 2018. Pseudo-Implicit Feedback for Alleviating Data Sparsity in Top-K Recommendation. In ICDM.

  3. Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural Graph Collaborative Filtering. In SIGIR.

  4. Jheng-Hong Yang, Chih-Ming Chen, Chuan-Ju Wang, and Ming-Feng Tsai. 2018. HOP-rec: high-order proximity for implicit recommendation. In RecSys.

  5. Introduction to Question Answering over Knowledge Graphs

  6. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NeurIPS.

  7. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In ACL.

  8. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In AAAI.

  9. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In AAAI.

  10. Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In UAI.