
9 Questions for Analyzing the Tweet Stream

Daniel Flamberg

Every marketer is professionally paranoid. We constantly worry about what’s being said about our brands. And with more than 25 million US adults regularly sharing advice, reviews, and comments about products and services online, posting 50 million tweets per day, and nearly 1 in 5 site visits resulting from these conversations, who can blame us? We know that unbrokered conversations about our brands, our products, our services, and our people are taking place among our customers and prospects. Not knowing what’s being said worries us.



We fear the online backlash that virally spins out of control. We’re conscious of the hapless PR guy who sent out a press release bragging about his Twitter account and who, 447 Diggs later, was crowned “the biggest douche in social media.” We watched JetBlue get slammed and we are witnessing Toyota getting pilloried. 



Twitter is potentially an early-warning radar for brands. By monitoring the stream of 140-character missives, brands can ideally get a feel for what’s being said, understand competitive comparisons, identify brand loyalists or opinion-makers worthy of extra care and attention, and intervene before problems or negative comments cascade into real trouble.



The two biggest outstanding issues are …




  • How do we effectively mine Twitter to reflect accurately what’s being said?

  • And what should brands do with the results?


PV Kannan, CEO of 24/7Customer, a global call center firm, has created a data-mining tool called 24/7Tweetview to help clients monitor, mine and respond to customer sentiment in near real-time. He claims to use “data mining and behavioral analytics to analyze Twitter comments for sentiment, tone, frequency and region” which in turn yields reports “for how the company measures up against other businesses as well as within their own customer base.” 



Working for typical call center clients – credit cards, telcos and cable companies, financial services firms, technology manufacturers, and retailers – this is a unique and differentiating value proposition. PV and his team create a snapshot of what’s going on based on five to six hours of data mining, rating sentiment on a -1/0/+1 scale and creating a “net emotion score” using a combination of open-source data mining tools and homemade analytical algorithms. They create keyword libraries for their clients and focus on industry-specific phrases (e.g., “friends and family”) to assess competitive prices, offers, channels, products, and customer service issues.
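To make the mechanics concrete, here is a minimal sketch of how a keyword-driven -1/0/+1 rating could roll up into a “net emotion score.” The keyword lists and the averaging scheme are my own illustrative assumptions, not 24/7Tweetview’s actual algorithm:

```python
# Illustrative sketch only -- the keyword libraries and scoring below are
# invented assumptions, not 24/7Tweetview's proprietary method.

POSITIVE = {"love", "great", "thanks", "recommend"}
NEGATIVE = {"hate", "broken", "ripoff", "cancel"}

def sentiment(tweet: str) -> int:
    """Rate a single tweet on the -1 / 0 / +1 scale via keyword matching."""
    words = set(tweet.lower().split())
    score = 0
    if words & POSITIVE:
        score += 1
    if words & NEGATIVE:
        score -= 1
    return max(-1, min(1, score))

def net_emotion_score(tweets: list[str]) -> float:
    """Average the per-tweet ratings into one net score for the brand."""
    if not tweets:
        return 0.0
    return sum(sentiment(t) for t in tweets) / len(tweets)
```

A real system would swap the hand-built word sets for client-specific keyword libraries and industry phrases, which is exactly where the definitional problems discussed below begin.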



The key to this effort, and to the Twitter mining concept generally, is defining terms and setting the filters used to collect, sort, and assess millions of conversations. At present everyone from PV to Radian6 to Dan Zarrella does it their own way.



If you doubt me, run a TweetPsych report on your own Twitter account. Since my Twitter account is linked to my blog, 85% of my tweets are professionally oriented. According to TweetPsych, the language in my tweets indicates I post about “learning” 73% more than average. But since I rarely write about school or teaching, this is a curious result. Similarly, I write consistently about media, yet this tool must be programmed for different words or phrases, since it indicates I tweet 17% less than average about media.



The flattering, though bewildering, stuff is that I’m way below average on anxiety (-34%), control (-54%), primordial reptilian thoughts (-56%), and TMI or self-reference (-78%). You can see how these entertaining but artificial categories are off to a degree, based on the unarticulated underlying programming assumptions.
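A percent-vs-average comparison of this kind is easy to sketch, and the sketch shows why the results hinge on programming assumptions: both the category lexicons and the assumed baseline rates below are invented for illustration, not TweetPsych’s actual data.

```python
# Hedged sketch of a "percent above/below average" category report.
# CATEGORIES and BASELINE are illustrative assumptions; different lexicons
# or baselines would produce entirely different percentages.

CATEGORIES = {
    "learning": {"learn", "lesson", "study", "insight"},
    "media": {"media", "tv", "press", "advertising"},
}

# Assumed fraction of an *average* user's tweets matching each category.
BASELINE = {"learning": 0.10, "media": 0.20}

def category_rate(tweets: list[str], lexicon: set[str]) -> float:
    """Fraction of tweets containing at least one lexicon word."""
    hits = sum(1 for t in tweets if set(t.lower().split()) & lexicon)
    return hits / len(tweets)

def vs_average(tweets: list[str], category: str) -> int:
    """Percent above (+) or below (-) the assumed average rate."""
    rate = category_rate(tweets, CATEGORIES[category])
    base = BASELINE[category]
    return round(100 * (rate - base) / base)
```

Change one word in a lexicon, or one baseline number, and a user flips from “73% above average” to well below it, which is the point about unarticulated assumptions.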



But keep in mind that under the best circumstances with the best tools (think NSA) we are unable to successfully mine jihadist chatter or identify inbound would-be Nigerian bombers. Nobody has really cracked the code on this yet. Marketers and social media gurus are just getting started. 



The next great leap will be to create common understandings about how to analyze the Twitter stream. This will all just be alchemy until the social media and marketing community establishes common definitions for tone, sets hypothetical thresholds for frequency, agrees on ways to measure intensity, and puts forward best practices for weighing and reading the online tea leaves.



And while there is some data and many apocryphal tales about real-time customer service using Twitter to reverse bad service or proactively delight customers, I suspect these are more random than not. 



Here are some of the outstanding questions I am sharing with those who are working with me in attempting to monitor, measure and respond to online conversations: 




  1. If Jeff Jarvis tweets that your brand sucks, is that better or worse than 50 unknown tweets explicitly or implicitly expressing anger, disappointment, or a sense of being ripped off, or detailing service shortfalls?

  2. How do we weight intensity? Is it specific language, the overall vehemence of the tweet, or should it also account for resonance (was it retweeted)?

  3. How do we understand or process authority? Some people know much more than others, or have experience or insights that would give their opinions more credence or more weight.

  4. If 50 or 100 of these tweets come through over a week, a month, or a quarter, how seriously should a brand take them, and what level of action or intervention is required? How much tweeting action over what time period helps or hurts you?

  5. How do we separate frequent tweeters and blabbermouths from thoughtful or opinion-leading tweeters?

  6. Is frequency enough of an indicator? If not, how can we mine and measure the content of tweets?

  7. How much negative feedback is enough to cause genuine concern and prompt action? In highly transactional businesses, someone is always complaining. Assuming that every brand has a few detractors, how much bad news or how many bad raps are necessary to call out the customer service or PR firemen?

  8. What is the interplay between brand advocates and loyalists tweeting in opposition to detractors? Is there a baseline balance of online commentary that brands should expect? How much frequency or intensity is needed to prompt a specific response? How do you know when you’re really in trouble?

  9. How do you weight tweets? And is it really in the best interests of a brand to air or fix these situations in a public forum, or should brand tweets direct disgruntled customers offline?


Few marketers doubt the ultimate value of Twitter and other social media for uncovering customer sentiment and improving customer engagement. But we are in the early pioneer days of data mining, and everyone should be conscious that what we are dredging up might or might not be a clear or accurate reflection of what’s really going on. For the near term, stay paranoid.

Helping dominant brands extend their share and grow customer loyalty and helping insurgent and start-up brands capture attention, awareness and market share. Danny Flamberg has been building brands and building businesses for more than 25 years. He...
