Market Research + Sentiment Analysis = New Insight

Q & A with Kate Niederhoffer, Knowable Research


Posted October 24, 2012

Social psychologist Kate Niederhoffer founded Knowable Research after a career that has taken her from BuzzMetrics to the Nielsen Company, where she was VP of Measurement Science, to Dachis Group, which she also co-founded. She will present the keynote address, "Sentiment Driven Behaviors; Sentiment Driven Decisions," at the upcoming Sentiment Analysis Symposium, October 30, 2012 in San Francisco.

Kate is one of four authorities who graciously agreed to respond to a series of questions exploring the role of sentiment analysis, text analytics, and emerging social-intelligence technologies in support of next-generation market research. Read her responses to questions from Seth Grimes and then return to the full set of interviews.


Q1> What impact is sentiment analysis having in the market-research world, whether applied for surveys or to social media?

Kate Niederhoffer> Sentiment is having a huge impact, albeit a split one. On the one hand, it's a simple enough way (read: shortcut) to add some structure to the unstructured data that is social media and to act as a KPI on many a marketer's dashboard. On the other hand, it's not "hard" enough-- not reliably or directly tied to behaviors, be they psychologically or financially significant; and the methodologies by which sentiment is calculated vary drastically, leading to mistrust and confusion.

For many reasons I think of sentiment as a great gateway metric-- it compels people to dive deeper, beyond a pulse. In my opinion, it only gives researchers (be they managers, strategists, developers, etc.) a better sense of "market sentiment" when placed in context (e.g. over time, in comparison) or when it otherwise leads to more immersion in the data and validation via alternate datasets. I think it's important for people to remember that just because data is structured, that doesn't mean it's tied to meaningful behavior.

Q2> Could you describe one or two things you or your clients have learned, via automated text/sentiment analysis, that you wouldn't have discovered otherwise?

Kate> In general, my clients have learned to use sentiment as a gauge of when to go deeper and invest further-- in terms of time, more research, and different research. Some have learned this careful interpretation in unfavorable situations. For example, a technology client of mine was garnering consistently high positive sentiment, much higher than competitors. Pride put the metric on an executive dashboard, yielding false optimism. Many thought their sentiment profile was a leading indicator, but in fact there was no translation to business outcomes. Paradoxically, competitors who were yielding a fraction of their positive sentiment, in addition to a substantial amount of negative sentiment, were benefiting from the critical acclaim ("engagement"). My client's was a more nuanced story: sentiment at the brand level didn't map onto purchasing decisions. To understand what motivates buying behavior, they had to cast a much wider, categorical net. The sentiment analysis then had to be focused on a much more specific aspect of consideration for the technology.

Sometimes it's easier... I have a healthcare client who uses sentiment to make snapshot decisions throughout the day regarding crisis intervention. For any given topic being tracked, she uses net sentiment to determine whether to invest in issuing a statement, clarifying misconceptions, and/or driving related content creation. There's no scalable real-time solution that would allow her to be so responsive otherwise.
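To make the "net sentiment as a decision gauge" idea concrete, here is a minimal sketch. Net sentiment is commonly computed as (positive mentions − negative mentions) / total mentions; the intervention threshold and the function names are illustrative assumptions, not details from the interview.

```python
# Hypothetical net-sentiment gate for crisis monitoring.
# Mentions are assumed pre-labeled as "positive", "negative", or "neutral";
# the -0.2 threshold is an assumed example value, not from the interview.

def net_sentiment(mentions):
    """Net sentiment = (positive - negative) / total, ranging from -1 to 1."""
    if not mentions:
        return 0.0
    pos = sum(1 for m in mentions if m == "positive")
    neg = sum(1 for m in mentions if m == "negative")
    return (pos - neg) / len(mentions)

def should_intervene(mentions, threshold=-0.2):
    """Flag a topic for intervention when net sentiment drops below threshold."""
    return net_sentiment(mentions) < threshold

mentions = ["negative", "negative", "neutral", "positive", "negative"]
print(round(net_sentiment(mentions), 2))  # -0.4
print(should_intervene(mentions))         # True
```

In practice a threshold like this would be tuned per topic and combined with volume, as a spike of ten negative mentions means something very different from a spike of ten thousand.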

Some of the more interesting insights my clients learn come from differences in sentiment across different audience segments.

Q3> How consistent are research findings from social media and from surveys?

Kate> This is a difficult question to answer given the inherently exploratory possibilities of social media and the pointed decisiveness of surveys. Instead, here's my favorite way in which they are consistent: the scale of social media decreases the probability of random deviation! In terms of a known inconsistency, I'm a firm believer in the bias of self-report, and I believe in using multiple lenses on naturalistic data before jumping to the conclusion that "simply asking" is more accurate. While I believe in triangulation, I often find these varied methodologies shouldn't be used to answer the same research questions. Unfortunately, I haven't yet been able to systematically identify a set of attitudinal variables, for example, that are reliably better captured through survey OR social. If I went back to academia, I'd love to figure out these parameters!

Q4> Are there quality concerns in social-media MR, for instance due to not having representative sources or due to language and usage irregularities? And are there special advantages in social-media MR?

Kate> Of course. There are shortcomings with every methodology, including the most carefully planned survey with rigorous statistical analysis, but you control for quality where you can-- and this is hugely important in selecting a listening vendor or sentiment provider. Spam and deception run rampant online-- and are difficult to discern. Many checks need to be in place behind the scenes to ensure you're working with a quality data set. Metadata is also very promising for validating various dimensions of the data.

The most important advice in social media research-- as cliché as it may sound-- is to know what you know and know what you don't know. This applies to the data quality issue above, and to coverage more broadly. You need to be intimate with the data to know what/who/where it represents, and be highly vigilant about what you know you don't have access to, e.g. usage/buying/visiting patterns and rich profile information, in addition to what you simply can't say about the data.

There are also impressive advantages - its scale, its cost, its naturalistic context, its nuance, its analytic flexibility, its accessibility, its real-time nature, its spontaneity, its archive...


Meet Kate Niederhoffer and other market research innovators at the Sentiment Analysis Symposium, October 30, 2012 in San Francisco. For now, return to the full set of interviews.
