Identifying Opinion Drivers on Social Media
Anish Bhanushali, Raksha Pavagada Subbanarasimha,
and Srinath Srinivasa
International Institute of Information Technology, 26/C, Electronics City,
Bangalore 560100, Karnataka, India
{anish.bhanushali,raksha.p.s}@iiitb.org, sri@iiitb.ac.in
Abstract. Social media is increasingly playing a central role in commercial and political strategies, making it imperative to understand its
dynamics. In our work, we propose a model of social media as a “marketplace of opinions.” Online social media is a participatory medium where
several vested interests invest their opinions on disparate issues, and
actively seek to establish a narrative that yields them positive returns
from the population. This paper focuses on the problem of identifying
such potential “drivers” of opinions for a given topic on social media. The
intention to drive opinions is characterized by the following observable
parameters: (a) significant level of proactive interest in the issue, and
(b) narrow focus in terms of their distribution of topics. We test this
hypothesis by building a computational model over Twitter data. Since
we are trying to detect an intentional entity (intention to drive opinions), we resort to human judgment as the benchmark, against which
we compare the algorithm. Opinion drivers are also shown to reflect the
topical distribution of the trend better than users with high activity or
impact. Identifying opinion drivers helps us reduce a trending topic to its
“signature”, comprising the set of its opinion drivers and the opinions
driven by them.
Keywords: Social media · Drivers · Opinion marketplace

1 Introduction
The participatory nature of social media is increasingly impacting corporate and
political strategies, in addition to affecting the daily lives of individuals. Social
media is rapidly becoming the platform of choice for commercial and political
campaigns and is seen as a strategic tool for engaging with potential stakeholders.
While campaigns have always been a part of mainstream media, the participatory nature of social media makes the dynamics much more complex. In fact,
on social media, there may not be a clear distinction between a campaign and
an organically trending open discussion. Confirmation bias plays a major role
in online discussions [10] and even routine discussions are replete with different parties trying to implicitly promote their point of view (POV), essentially
making any trending topic a collection of several micro-campaigns.
Given the dearth of understanding of such dynamics, there is a need for
suitable models. In our work, we approach this problem by modeling social media
as a “marketplace of opinions.” Social media is seen as a secondary marketplace,
where different vested interests invest their opinions on issues that affect them,
and actively promote their “investments” in order to gain greater acceptance
for such opinions. The “market share” of such opinions usually brings concrete returns in the primary marketplace where the parties operate.
For instance, promoting positive opinions about one’s product may result in
greater sales. Similarly, promoting dissonance or distrust about an opponent in
elections may bring concrete results in terms of vote shares.
Social media offers a relatively small and equal opportunity cost for different parties to invest and grow their opinions. This is in contrast to mainstream
media, where viewers typically have very few opportunities to air their opinions.
As a result, opinion dynamics on social media are much richer and more variegated.
Here, opinions not only clash and compete with one another, but compatible
opinions often “team up” to collectively increase their impact. Such constellations of mutually compatible opinions, sustained over time, become narratives
which are powerful foundations that shape worldviews, policies and even the
course of history.
In this paper we address a specific problem under the proposed opinion-marketplace paradigm: identifying opinion “drivers” for a trending topic, i.e., user accounts that intend to drive a specific opinion or viewpoint on the issue. Identifying opinion drivers and their respective opinions on a given trending topic helps us reduce the trend to two dimensions: who is driving the trend, and in what directions they are driving it.
Opinion drivers are characterized by their latent intent to drive opinions.
While the intent cannot be directly observed, we posit that it manifests in the
following observables:
1. Proactive interest in the trending topic, and
2. Consistency or a narrow focus in one’s vocabulary.
We test this hypothesis on different trending topics on Twitter, by analyzing
the information content in the topics expressed by participants. Opinion drivers
are then identified by measuring the entropy in their expressed topics, tempered
with their participation rate.
2 Related Literature
Understanding underlying intent in social media communications has elicited a lot of research interest. Lampos et al. [9] measure users’ intention to vote for a particular party during elections, using users’ vocabulary and their role (ordinary user, journalist, businessperson, etc.) as characteristic attributes for prediction.
A problem closely related to driver identification is the “expert” identification
problem. Users who possess expertise in a topical area often exert great influence
and also drive conversations on that topic along specific lines. Cognos [7] is an
approach that uses Twitter lists for identifying topical experts. Users’ membership in lists representing specific topics is used as a criterion to rank users for that
topic. Pal et al. [13] address this problem by clustering users into disjoint groups
called authorities and non-authorities, based on a measure of “self-similarity” or
topical consistency in users’ tweets. Wagner et al. [15] use classic topic models
like Latent Dirichlet Allocation (LDA) [2] to identify distributions of topics in a set of tweets. A measure called Normalized Mutual Information (NMI) is then
used to compare the entropy of topic distribution for a user with a ground truth
obtained from Wefollow (now called about.me).
Another study, by Cha et al. [4], has shown that by observing the number of retweets, followers and mentions a user has, one can determine whether
the user is an influencer. Bakshy et al. [1] address the question of quantifying
influence of a user. The main idea is to score each user based on the seed posts made by that user and how large a cascade each has generated. They observe bit.ly URLs in tweets: if a user tweets a URL that was not posted earlier by any of the users they follow, that post is considered a seed post. The cascade of a seed post is defined as the number of users who spread it by retweeting it, mentioning it, or embedding its link in their own tweets.
A machine learning approach to identify campaigns is proposed by Ferrara
et al. [6]. They construct 423 features covering the user network, user account properties, timing, part-of-speech tags and sentiment. Since the method uses supervised learning, it depends upon good training data. An unsupervised method to identify campaigns is proposed by Li et al. [12], based on Markov Random Fields. They build a network of activity bursts, users and URLs, and compute the probability of every user being a promoter of a burst. This method uses only the URLs in tweets, ignoring their other content.
Lee et al. [11] identify campaigns by building a network of tweets which share
similar terms (called a “talking point”), after removing noise such as singleton
tweets from the network. Then a clique detection algorithm is used to find sets
of tweets that could potentially indicate a campaign. Zhang et al. [16] improve on this study by measuring the entropy of only the URLs posted by a user. The similarity score between two accounts (say users A and B) is the entropy of the URLs common to both accounts, divided by the sum of the individual URL entropies of A and B. While Lee et al. [11] build a graph of tweets, Zhang et al. [16] build a graph of users, with the entropy-based similarity score as edge weight. From our observations, campaigns and opinion drivers do not just use URLs; their aim is to gain greater acceptance for their opinions, which is why, to detect opinion drivers, we need to focus on actual terms rather than just URLs.
Borge-Holthoefer et al. [3] analyze the dynamics of social media communication using the symbolic transfer entropy of time series of information flow. They capture the characteristic time scale of events on social media during the various stages of an event. They compare time series of the volume of Twitter posts across different locations, capturing both spatial and temporal aspects of the
tweets. They observe that in most cases a drop in the characteristic time scale marks the beginning of a collective phenomenon. While the authors look at the volume and directional information flow of tweets, the content of tweets and their intention are not captured in this work. Their work focuses on identifying the structural signature of a social phenomenon, whereas we focus on identifying the semantic signature of a social media event.
Also, being a topical expert or influencer does not necessarily mean that the
user is an opinion driver. Influence scores and expertise are mainly functions of a user’s position in the network rather than their intentions. This makes driver identification an open problem.
3 Driver Identification: Formal Model
We use Twitter as the social media of choice for identifying opinion drivers.
However, the formal model for a driver is itself generic, and does not use any
Twitter-specific elements in its formulation. The proposed model can be easily
adapted to analyze trending activity on other social media platforms like Facebook, Reddit, etc.
We start with a topic t representing a trending activity on the social media. A dataset D_t of the social media activity concerning topic t is extracted, represented as a set of posts. The term users(D_t) represents the set of all users who have contributed to D_t, and the term userposts(u) represents the set of all posts by user u ∈ users(D_t). The topic t is represented as an abstract space characterized by a set of dimensions (t_1, ..., t_m), each of which represents a term that is relevant to t, where m is the total number of terms in D_t. Each user u ∈ users(D_t) is defined as a vector u = (f_1, ..., f_m), where each element f_i represents the frequency of term t_i as used by user u, measured as the number of posts by u for which t_i is relevant.
Activities of any user u are compared against a hypothetical “null” user u_0, which is modeled based on expected levels of activity.
The support for a given term t_i, for a given user u, is the probability of u being the author of any post mentioning t_i:
$$\mathrm{support}(u, t_i) = \frac{freq(t_i, u)}{|t_i(D_t)|} \tag{1}$$
Here, freq(t_i, u) is the frequency of term t_i as used by user u, and t_i(D_t) is the set of all posts in D_t that mention t_i.
For the null user, support for each term is calculated as the expected support
averaged over all users:
$$\mathrm{support}(u_0, t_i) = \frac{1}{|users(D_t)|} \sum_{u \in users(D_t)} \mathrm{support}(u, t_i) \tag{2}$$
The “support vector” of user u, denoted support(u), is a vector containing the support of each term for user u. Our first evidence towards an underlying intent
to drive opinions comes from observing how focused this support vector is, compared to the set of all terms describing t. This is formulated in terms of the entropy of this vector.
To calculate the entropy, we first normalize the support vector into a probability vector (a point on the simplex):
$$p_u(t_i) = \frac{\mathrm{support}(u, t_i)}{\sum_{j=1}^{m} \mathrm{support}(u, t_j)} \tag{3}$$
Entropy, which is a measure of the spread or “scatteredness” of the vocabulary of user u, is calculated as follows:
$$H_u = -\sum_{i=1}^{m} p_u(t_i) \log_2 p_u(t_i) \tag{4}$$
Higher values of entropy indicate a scattered vocabulary, while lower values indicate greater focus in the vocabulary of the user. Entropy values for a given user are compared against the “null” user u_0, whose entropy is the expected entropy, which in turn is calculated from the expected support vector:
$$H_{u_0} = E(H_u) = -\sum_{i=1}^{m} p_{u_0}(t_i) \log_2 p_{u_0}(t_i) \tag{5}$$
In order to convert entropy values into an evidence score, we compute their complement, comparing each user’s entropy against the maximum score obtained:
$$\mathrm{Evidence}(u) = \frac{(\hat{H} + 1) - H_u}{\max_{u}\left((\hat{H} + 1) - H_u\right)} \tag{6}$$
Here, Ĥ is the maximum value of entropy obtained from all the users in the
corpus. A high value of the evidence score indicates that the vocabulary of the
user is highly focused on a small set of terms, in comparison with the overall
vocabulary used by the community for this topic.
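A minimal sketch of Eqs. (3), (4) and (6), reusing the per-user support vectors from the previous sketch; the function names are ours:

```python
import math

def entropy(support_vec):
    """H_u (Eq. 4) of a support vector, after normalizing it into a
    probability vector (Eq. 3)."""
    total = sum(support_vec.values())
    return -sum((s / total) * math.log2(s / total)
                for s in support_vec.values() if s > 0)

def evidence_scores(supports):
    """Evidence(u) (Eq. 6) for every user; `supports` maps each user
    to their support vector."""
    H = {u: entropy(vec) for u, vec in supports.items()}
    H_hat = max(H.values())                 # maximum entropy in the corpus
    raw = {u: (H_hat + 1) - h for u, h in H.items()}
    max_raw = max(raw.values())
    return {u: r / max_raw for u, r in raw.items()}
```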
The evidence score is by itself insufficient to conclude an intent to drive opinions. A casual user who has participated minimally, with just one post, would also show low entropy. Users who intend to drive opinions would not only be focused in their vocabulary, but also display significant amounts of activity.
To measure this, we model a prior distribution of user activity manifested by an intent to drive opinions. The intention to drive opinions is modeled as a Dirichlet process DP(B, α), built around a base distribution B, based on the observed activity distribution:
$$B(u) = \frac{|posts(u)|}{|D_t|} \tag{7}$$
where B is the base distribution and α is a positive real number called the scaling parameter.
A Dirichlet process DP(B, α) generates a set of data elements {X_1, X_2, ..., X_n} such that for the n-th data element [14]:
1. If n = 1, the value of X_1 is drawn from the base distribution B.
2. If n > 1, then:
   (a) with probability α/(α + n − 1), the value of X_n is drawn from the base distribution B;
   (b) with probability n_u/(α + n − 1), set X_n = u, where n_u is the number of times u has been drawn in the past.
With N data elements thus generated, the probability of user u is defined as:
$$p(u) = \frac{n_u}{N} \tag{8}$$
Dirichlet processes are useful for modeling latent variables affecting observable phenomena. While the base distribution reflects observable activity, the Dirichlet process uses relative variations in observable activity levels to estimate a prior probability of a latent intentional variable driving the activity. After the Dirichlet process completes, every user in D_t is assigned a new prior probability representing their latent intention to drive an opinion. For our experiments, we set α = 0.5. This value is based on empirical verification of the stability of the prior values generated by the Dirichlet process across several synthetic generation runs, given the observed base distribution.
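The generative loop above is easy to simulate. A sketch, assuming the base distribution B(u) is given as a dictionary; the number of draws and the seed are our choices, not the paper’s:

```python
import random
from collections import Counter

def dirichlet_process_prior(base, n_draws=10000, alpha=0.5, seed=42):
    """Simulate draws from DP(B, alpha) over users and return the new
    prior p(u) = n_u / N (Eq. 8). `base` maps user -> B(u) (Eq. 7)."""
    rng = random.Random(seed)
    users = list(base)
    weights = [base[u] for u in users]
    draws = []
    for n in range(1, n_draws + 1):
        # With probability alpha / (alpha + n - 1), draw afresh from B;
        # otherwise repeat a past draw, which picks user u with
        # probability n_u / (alpha + n - 1).
        if n == 1 or rng.random() < alpha / (alpha + n - 1):
            draws.append(rng.choices(users, weights=weights)[0])
        else:
            draws.append(rng.choice(draws))
    counts = Counter(draws)
    return {u: c / n_draws for u, c in counts.items()}
```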
The prior probability of the null user is given by a least-biased formulation of expected activity, which assumes that all users have the same level of activity:
$$p(u_0) = \frac{1}{|users(D_t)|} \tag{9}$$
Given the above, the posterior likelihood that a given user is a driver of opinions is given by:
$$L(u) = \mathrm{Evidence}(u) \cdot p(u) \tag{10}$$
After calculating the posterior likelihood of all users in the dataset, users are ranked in descending order of this score. Empirical examination of the distribution of L(u) scores for several datasets shows that it is very skewed, with steep changes in value from one user to the next. To classify the ranked list of users into drivers and non-drivers, we compute the pair-wise difference in scores between each user and the previous one. The first point at which this differential is less than the expected differential score is marked as a dividing point. The same technique is used to mark a second dividing point, so that the dataset is finally represented as either 2 or 3 classes.
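A sketch of this cut procedure, reading “expected differential” as the mean pair-wise drop in the ranked L(u) scores (our interpretation of the text):

```python
def split_ranked_users(likelihoods):
    """Split users, ranked by L(u) (Eq. 10), into 2 or 3 classes at the
    first (at most two) points where the score drop falls below the
    mean drop. `likelihoods` maps user -> L(u)."""
    ranked = sorted(likelihoods, key=likelihoods.get, reverse=True)
    if len(ranked) < 2:
        return [ranked]
    scores = [likelihoods[u] for u in ranked]
    diffs = [scores[i - 1] - scores[i] for i in range(1, len(scores))]
    expected = sum(diffs) / len(diffs)      # expected differential score
    cuts = [i for i, d in enumerate(diffs, start=1) if d < expected][:2]
    classes, prev = [], 0
    for cut in cuts + [len(ranked)]:
        if cut > prev:
            classes.append(ranked[prev:cut])
            prev = cut
    return classes                          # drivers come first
```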
4 Results and Evaluation
For the purpose of evaluating the model, we considered Twitter data for several topics. A topic is either a keyword or a hashtag (for example, “#jallikattu”,
“USElections2016”, “GlobalWarming”, etc.). For each topic, on average 2,488 tweets were fetched from Twitter. Before identifying opinion drivers, the tweets were pre-processed. This pre-processing involved removing stop words, punctuation, URLs, the searched keyword or hashtag, and extra whitespace, and converting all terms to lowercase. With the set of terms thus obtained, the opinion-driver score was then computed for all users in the respective topic.
Some of the topics were known a priori to be promoted campaigns (#Ignis and
#SpiceJetBigOrder), while others were organically trending topics. Intentionally
promoted campaigns and organically trending topics were easily distinguishable,
based on the location of their null user. Figure 1 shows the distribution of posterior scores (Eq. 10) and the position of the null user for an organically trending
topic, while Fig. 2 shows the distribution of posterior scores and the position of
the null user for a promoted campaign. The null user’s score was found to lie well inside the overall distribution for promoted campaigns, while it fell below the scores of all other users in organically trending topics.
Fig. 1. Posterior probability score distribution for users of topic ‘#Kansas’, divided into 3 classes, along with the null user.

Fig. 2. Posterior probability score distribution for users of topic ‘#SpiceJetBigOrder’, divided into 3 classes, along with the null user.
In topics that are organically trending, the overall vocabulary is much richer, giving high expected values of entropy and thus pushing the null user below.
Evaluation of scores was performed in two ways. In the first method, evaluators were given the definition of opinion drivers and asked to classify users of various topics into three classes, namely: “definitely driver” (clearly showing an intention to drive opinions), “may be driver” (potentially showing an intention to drive opinions) and “definitely not driver” (clearly showing no intention to drive opinions), based on the posts/tweets made by the respective user. In the second method, a 2-class classification scheme was used, where users were classified as either “driver” or “non-driver”. In both evaluation methods, all users who had only one tweet in the dataset were removed from consideration.
Evaluation was performed by 19 human evaluators, largely coming from
a graduate student pool, who were aware of and understood the problem.
Each evaluator was provided with a list of 20 users for each of five topics. The lists comprised users chosen from all classes with equal random probability.
Each evaluator classified 20 users across 5 topics, yielding 100 values per evaluator. The evaluation results form a matrix of dimensions 100 × 19, where rows represent users (Twitter handles) and columns represent evaluator scores. The modal value of the class in a row is chosen as the result of the evaluation for that Twitter handle. The confidence of this evaluation is calibrated based on the agreement across evaluators, calculated as:
$$\mathrm{Agreement}_{user[i]} = \frac{count(mode(i))}{\text{total number of evaluators}} \tag{11}$$
where count(mode(i)) is the number of times the modal value is repeated in row i.
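Computing Eq. (11) for one row of the evaluation matrix is straightforward; a small sketch with made-up labels:

```python
from collections import Counter

def agreement(row):
    """Agreement (Eq. 11) for one user: the share of evaluators whose
    label matches the modal label of the row."""
    return max(Counter(row).values()) / len(row)

labels = ["driver"] * 14 + ["non-driver"] * 5   # 19 hypothetical evaluator labels
print(agreement(labels))                         # 14/19 ≈ 0.74
```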
Table 1 reports the average agreement scores across all evaluators for both the 3-class and 2-class models. Agreement scores were also computed after adding the algorithmically computed class as an extra column in the evaluation matrix; these appear in rows 3 and 5 of Table 1 for the 2-class and 3-class models respectively. For the 2-class evaluation, the agreement score between evaluators is 0.75, and the agreement score between evaluators and the proposed model is 0.78. For the 3-class evaluation, the agreement score between evaluators is 0.60, and the agreement score between evaluators and the proposed model is 0.62. We interpret this to mean that even if the algorithm were to replace one of the human evaluators, it would result in little or no change in the overall consensus.
Algorithmic results were also calibrated for precision and recall, leading to the F1 measure. Precision was computed as the proportion of users identified as drivers by the algorithm who were also identified as drivers by the evaluators. Recall was the proportion of all users identified as drivers by the evaluators who were also identified by the algorithm. The F1 score is the harmonic mean of the precision and recall scores.
Finally, Cohen’s kappa [5] was used to calibrate agreement between the evaluation scores based on modal class values and the algorithmically computed class for each user.
Table 1. Evaluation scores for different metrics

No. of classes | Metric                                                     | Score
---------------|------------------------------------------------------------|------
2              | F1 score                                                   | 0.77
2              | Agreement score (between evaluators)                       | 0.75
2              | Agreement score (between evaluators and algorithm result)  | 0.78
3              | Agreement score (between evaluators)                       | 0.60
3              | Agreement score (between evaluators and algorithm result)  | 0.62
3              | Cohen’s kappa (quadratic weights)                          | 0.67
We obtained a score of 0.67, which indicates a positive correlation between evaluator and algorithm scores.
Table 2. Kendall’s correlation scores comparing impact-, activity- and driver-score-based orderings of terms with the global ordering of terms based on frequency

Topic           | Users with high impact | Users with high activity | Opinion drivers
----------------|------------------------|--------------------------|----------------
Election        | 0.30                   | 0.09                     | 0.36
#TrumpGlobalGag | 0.38                   | 0.41                     | 0.65
#Kansas         | 0.31                   | 0.36                     | 0.41
ABVP            | 0.31                   | 0.51                     | 0.58
Evaluating robustness of the model. We also asked how sensitive the computed score is to the specifics of the dataset, and whether the driver scores would change with more or less data collected. The model is said to be robust if it is impervious to minor perturbations in the evidence. To test this, the tweets of a topic were first sorted by timestamp. Driver scores were then iteratively computed, with each iteration reducing the data size by 3%. This process was repeated until the data size was reduced to 45% of its original size. At each iteration the top driver was noted. If the model is robust with respect to the size of the data, then the same user should be the top driver in all iterations.

In our experiments on different topics, the top driver for the dataset remained the same on average 79% of the time across the iterations.
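A sketch of this robustness check; the score_fn callback stands in for the full driver-scoring pipeline, and since the text does not say which end of the time-sorted data is trimmed, we keep the earliest tweets (an assumption):

```python
def top_driver_stability(tweets, score_fn, step=0.03, floor=0.45):
    """Sort tweets by timestamp, shrink the dataset by 3% per iteration
    down to 45%, record the top driver each time, and return the
    fraction of iterations agreeing with the full-data top driver.
    Each tweet is assumed to be a dict with a 'timestamp' field."""
    data = sorted(tweets, key=lambda t: t["timestamp"])
    tops, frac = [], 1.0
    while frac >= floor:
        subset = data[: int(len(data) * frac)]   # keep the earliest tweets
        scores = score_fn(subset)                # {user: L(u)} for this subset
        tops.append(max(scores, key=scores.get))
        frac -= step
    return tops.count(tops[0]) / len(tops)
```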
Comparison with user activity and user impact. We also address the question of whether opinion drivers can be trivially detected by the impact they have had on the discussion, or by their amount of activity. Since an opinion driver plans to drive collective opinions along specific directions, we compare opinion drivers and users with high activity or impact (based on retweets and likes) against collective opinion rankings. Collective opinion is modeled by a global ranking of terms with respect to the number of tweets they appear in.
Similar term distributions are computed for each model. We first score users according to the respective model (activity, impact, or driver score). Terms used by these users are given an aggregated score by adding the respective model score every time a term is used by a user. Terms are then ranked by this aggregated score, and the ranking is compared with the global ranking using the Kendall rank correlation coefficient [8].
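A sketch of this comparison using SciPy’s implementation of Kendall’s tau; the (user, terms) representation of tweets is again illustrative:

```python
from collections import Counter
from scipy.stats import kendalltau

def term_rank_correlation(tweets, user_scores):
    """Correlate the term ordering induced by a user-scoring model
    (activity, impact or driver score) with the global frequency-based
    ordering, via Kendall's tau [8]."""
    global_freq = Counter()   # global ranking: number of tweets per term
    weighted = Counter()      # model-weighted aggregated term score
    for user, terms in tweets:
        for term in terms:
            global_freq[term] += 1
            weighted[term] += user_scores.get(user, 0.0)
    terms = list(global_freq)
    tau, _ = kendalltau([global_freq[t] for t in terms],
                        [weighted[t] for t in terms])
    return tau
```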
Table 3. Signature of the narratives

Topic   | Drivers  | Top-1st word             | Top-2nd word           | Top-3rd word
--------|----------|--------------------------|------------------------|---------------------------
#Ignis  | Driver 1 | {@nexaexperience}        | {launch}               | {electronation, ignis}
#Ignis  | Driver 2 | {@nexaexperience}        | {electronation}        | {launch, looking, forward}
#Ignis  | Driver 3 | {@nexaexperience}        | {electronation, watch} | {launch, live}
ABVP    | Driver 1 | {#delhiuniversity}       | {ramjas, college}      | {umar, khalid}
ABVP    | Driver 2 | {#modiministry}          | {ramjas, college}      | {khalid, protest, clash}
ABVP    | Driver 3 | {well, done}             | {traitors}             | {medicine, long, due}
#Kansas | Driver 1 | {feb, pm}                | {#shooting}            | {man}
#Kansas | Driver 2 | {#espn2, #tcu}           | {#aclu}                | {#doj}
#Kansas | Driver 3 | {kansas, #noplacelikeks} | {#ksbucketlist}        | {city}
Table 2 shows the average Kendall rank correlation coefficient scores calculated for these three models on different topics. For all topics considered, the rank ordering of terms by opinion drivers agreed with the global ordering more than that of users with high impact or activity. This indicates that opinion drivers represent the topical narrative more accurately.
Signature generation. The objective of driver identification is to reduce a trending topic to a “signature” representation comprising its drivers and the topics they represent. Table 3 shows the top 3 keywords for the top 3 drivers of different topics. The user handles of the top three drivers have been anonymized to protect their privacy.

The top terms for drivers of the promoted campaign (#Ignis) are strikingly consistent. Organically trending topics have drivers with different agendas, leading to different sets of top terms associated with them.
5 Conclusions
We conjecture that modeling social media as an opinion marketplace represents a significant leap in our understanding of its dynamics. This work presents preliminary research results in this direction, where we reduce a trending topic on social media to a set of opinions and their drivers. This helps in better understanding the latent intentional forces behind the trend. Reducing a trending topic to a set of directions and their drivers can be useful for several stakeholders: government organizations can identify suspicious activity, product companies can analyze and improve their products, Web science researchers can better understand social media activity, and social media enthusiasts can observe and participate in the directions they like. In future work, we plan to improve upon this model and extend it to characterize more elements of the opinion marketplace. We also plan to incorporate the temporal aspect of social media posts into the model.
References
1. Bakshy, E., Hofman, J.M., Mason, W.A., Watts, D.J.: Everyone’s an influencer:
quantifying influence on Twitter. In: Proceedings of the Fourth ACM International
Conference on Web Search and Data Mining, WSDM 2011, New York, NY, USA,
pp. 65–74. ACM (2011)
2. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
3. Borge-Holthoefer, J., Perra, N., Gonçalves, B., González-Bailón, S., Arenas, A.,
Moreno, Y., Vespignani, A.: The dynamics of information-driven coordination phenomena: a transfer entropy analysis. Sci. Adv. 2(4), e1501158 (2016)
4. Cha, M., Haddadi, H., Benevenuto, F., Gummadi, K.P.: Measuring user influence
in Twitter: the million follower fallacy. In: Proceedings of International AAAI
Conference on Weblogs and Social Media, ICWSM 2010 (2010)
5. Cohen, J.: A coefficient of agreement for nominal scales. Educ. Psychol. Measur.
20(1), 37–46 (1960)
6. Ferrara, E., Varol, O., Menczer, F., Flammini, A.: Detection of promoted social
media campaigns. In: Tenth International AAAI Conference on Web and Social
Media (2016)
7. Ghosh, S., Sharma, N., Benevenuto, F., Ganguly, N., Gummadi, K.: Cognos: crowdsourcing search for topic experts in microblogs. In: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information
Retrieval, SIGIR 2012, New York, NY, USA, pp. 575–590. ACM (2012)
8. Kendall, M.G.: A new measure of rank correlation. Biometrika 30(1–2), 81 (1938)
9. Lampos, V., Preoţiuc-Pietro, D., Cohn, T.: A user-centric model of voting intention
from social media. In: Proceedings of the 51st Annual Meeting of the Association
for Computational Linguistics, ACL 2013, pp. 993–1003 (2013)
10. Lee, J.K., Choi, J., Kim, C., Kim, Y.: Social media, network heterogeneity, and
opinion polarization. J. Commun. 64(4), 702–722 (2014)
11. Lee, K., Caverlee, J., Cheng, Z., Sui, D.Z.: Content-driven detection of campaigns
in social media. In: Proceedings of the 20th ACM International Conference on
Information and Knowledge Management, CIKM 2011, New York, NY, USA, pp. 551–556. ACM (2011)
Identifying Opinion Drivers on Social Media
253
12. Li, H., Mukherjee, A., Liu, B., Kornfield, R., Emery, S.: Detecting campaign promoters on Twitter using Markov random fields. In: Proceedings of the 2014 IEEE
International Conference on Data Mining, ICDM 2014, Washington, DC, USA, pp.
290–299. IEEE Computer Society (2014)
13. Pal, A., Counts, S.: Identifying topical authorities in microblogs. In: Proceedings
of the Fourth ACM International Conference on Web Search and Data Mining,
WSDM 2011, New York, NY, USA, pp. 45–54. ACM (2011)
14. Teh, Y.W.: Dirichlet Process, pp. 280–287. Springer, Boston (2010)
15. Wagner, C., Liao, V., Pirolli, P., Nelson, L., Strohmaier, M.: It’s not in their tweets: modeling topical expertise of Twitter users. In: Proceedings of the 2012 ASE/IEEE
International Conference on Social Computing and 2012 ASE/IEEE International
Conference on Privacy, Security, Risk and Trust, SOCIALCOM-PASSAT 2012,
Washington, DC, USA, pp. 91–100. IEEE Computer Society (2012)
16. Zhang, X., Zhu, S., Liang, W.: Detecting spam and promoting campaigns in the Twitter social network. In: 2012 IEEE 12th International Conference on Data Mining, ICDM 2012, pp. 1194–1199 (2012)