Thick Data – How the Science of Human Behavior Will Augment Analytics Outcomes
In recent years, there has been a lot of hype around “big” data in the marketing world. Big data is extremely helpful with gathering quantitative information about new trends, behaviors and preferences, so it’s no wonder companies invest a lot of time and money sifting through and analyzing massive sets of data. However, what big data fails to do is explain why we do what we do.
“Thick” data fills the gap. Thick data is qualitative information that provides insights into the everyday emotional lives of consumers. It goes beyond big data to explain why consumers have certain preferences, the reasons they behave the way they do, why certain trends stick and so on. Companies gather this data by conducting primary and secondary research in the form of surveys, focus groups, interviews, questionnaires, videos and other various methods. Ultimately, to understand people’s actions and what drives them to your business (or not), you need to understand the humanistic context in which they pursue these actions.
Human Behavior vs Human Data
It’s crucial for successful companies to analyze the emotional way in which people use their products or services to develop a better understanding of their customers. By using thick data, companies can develop a positive relationship with their customers and it becomes easier for those companies to maintain happy customers and attract new ones.
Take Lego, for example: a successful company that came near collapse in the early 2000s because it had lost touch with its customers. After failed attempts to reposition the company around action figures and other concepts, Jørgen Vig Knudstorp, CEO of the Danish firm, commissioned a major qualitative research project. Children in five major global cities were studied to help Lego better understand their emotional needs in relation to its products. After evaluating hours of video recordings of children playing with Lego bricks, a pattern emerged. Children were passionate about the “play experience” and the process of playing. Rather than the instant gratification of toys like action figures, children valued the experience of imagining and creating. The results were clear: Lego needed to return to marketing its traditional building blocks and focus less on action figures and other toys. Today, Lego is once again a successful company, and thick data proved to be its savior.
While it’s impossible to read the minds of customers, thick data allows us to be closer than ever to predicting the quirks of human behavior. The problem with big data is that companies can get too caught up in numbers and charts and forget the humanistic reality of their customers’ lives. By outsourcing our thinking to Big Data, our ability to make sense of the world by careful observation begins to wither, just as you miss the feel and texture of a new city by navigating it only with the help of a GPS.
The Perils of Big Data Exceptionalism
As the concept of “Big Data” has become mainstream, many practitioners and experts have cautioned organizations to adopt it in a balanced way. Qualitative researchers such as Genevieve Bell, Kate Crawford and danah boyd have written essays on the limitations of Big Data, covering algorithmic illusions, data fundamentalism and privacy concerns. Journalists have also added to the conversation. Inside organizations, Big Data can be dangerous: people get caught up in the quantity side of the equation rather than the quality of the business insights that analytics can unearth. More numbers do not necessarily produce more insights.
Another problem is that Big Data tends to place a huge value on quantitative results, while devaluing the importance of qualitative results. This leads to the dangerous idea that statistically normalized and standardized data is more useful and objective than qualitative data, reinforcing the notion that qualitative data is small data.
These two problems, in combination, reinforce and empower decades of corporate management decision-making based on quantitative data alone. Corporate management consultants have long been working with quantitative data to create more efficient and profitable companies.
Without a counterbalance, the risk in a Big Data world is that organizations and individuals start making decisions and optimizing performance for metrics derived from algorithms. In this optimization process, people, stories and actual experiences are forgotten. By taking human decision-making out of the equation, we slowly strip away deliberation: the moments where we reflect on the morality of our actions.
Where does Thick Data come from?
Harvard Business Review (HBR) defines thick data as a tool for developing ‘hypotheses’ about ‘why people behave’ in certain ways. While big data can indicate trends in behavior that allow marketers to form hypotheses, thick data can fill in the gaps and allow marketers to understand why their customers are likely to take certain actions.
While ‘thick data’ has recently received a great deal of attention among big data thought leaders, it’s not a new concept. There’s little difference between ‘thick’ data and ‘prescriptive analytics,’ both of which represent advanced maturity in marketing big data. By shifting focus from predictive big data to forming and testing hypotheses, marketers can better understand how their buyers will act in the future.
Historically, big data has been transactional, while thick data has been qualitative. For data-driven brands of years past, insights into consumer behavior were typically derived from behavioral observation, voice of the customer (VOC) or Net Promoter Score (NPS) surveying, focus groups, or other time-intensive research methods.
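To make one of these traditional metrics concrete: a Net Promoter Score is computed from a 0–10 “how likely are you to recommend us?” survey as the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch of that arithmetic, purely illustrative and not from the article:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    from responses on a 0-10 'how likely are you to recommend us?' scale."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # → 30
```

Note that NPS is a quantitative summary of a qualitative question; the open-ended follow-up (“why did you give that score?”) is where the thick data lives.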
Today, insights into consumer behavior can come from a variety of sources. Thanks to social media, internet of things technologies and other drivers of big data, marketers can gain insight into why humans act the way they do with data sources such as:
- Online or Mobile Behavior
- User-generated social media content
- 3rd-party transactional data
Studies indicate that currently, 95% of brand research into consumer preferences is performed manually, using methods such as surveys or focus groups. However, in an era where consumers generate thousands of signals each day through mobile usage, online shopping and social media updates, these insights are becoming far easier to obtain at scale.
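The manual step being automated here is thematic coding: a researcher reads each free-text response and tags it with recurring themes. A naive, purely illustrative sketch of that idea (the theme names and keyword lists are hypothetical, and real tooling would use NLP rather than keyword matching):

```python
# Illustrative only: a naive keyword-based theme tagger for open-ended
# survey or social media responses, the coding step usually done by hand.
THEMES = {
    "price":   {"expensive", "cheap", "price", "cost"},
    "quality": {"broke", "durable", "quality", "flimsy"},
    "service": {"support", "helpful", "rude", "staff"},
}

def tag_response(text):
    """Return the sorted list of themes whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(theme for theme, kws in THEMES.items() if words & kws)

print(tag_response("Great quality but too expensive"))  # → ['price', 'quality']
```

Even this crude version shows why scale matters: tagging ten focus-group transcripts by hand is feasible, tagging a million social posts is not.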
Finally, will Thick Data take over Big Data?
This is not to say big data is useless. It is a powerful and helpful tool companies should invest in. However, companies should also invest in gathering and analyzing thick data to uncover the deeper, more human meaning of big data. Together, thick data and big data give you an incredibly insightful advantage.
AI & Humanity – Existential Threat or Co-exist Attainability?
While some predict mass unemployment or all-out war between humans and artificial intelligence, others foresee a less bleak future: one in which humans and intelligent systems are inseparable, bound together in a continual exchange of information and goals, a “symbiotic autonomy,” if you will. It will be hard to distinguish human agency from automated assistance, but neither people nor software will be much use without the other.
Mutual Co-existence – A Symbiotic Autonomy
In the future, I believe there will be a co-existence between humans and artificial intelligence systems that will, hopefully, be of service to humanity. These AI systems will include software systems that handle the digital world, systems that move around in physical space, like drones, robots and autonomous cars, and systems that sense and process the physical world, like the Internet of Things.
I don’t think that AI will become an existential threat to humanity. Not that it’s impossible, but we would have to be very stupid to let that happen. Others have claimed that we would have to be very smart to prevent it from happening, but I don’t think that’s true.
If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity. There is also a fallacy rooted in the fact that our only exposure to intelligence is through other humans: there is absolutely no reason to assume that intelligent machines will even want to dominate the world or threaten humanity. The will to dominate is a very human one (and only for certain humans).
Even in humans, intelligence is not correlated with a desire for power. In fact, current events tell us that the thirst for power can be excessive (and somewhat successful) in people with limited intelligence.
You will have more intelligent systems in the physical world, too — not just on your cell phone or computer, but physically present around us, processing and sensing information about the physical world and helping us with decisions that include knowing a lot about features of the physical world. As time goes by, we’ll also see these AI systems having an impact on broader problems in society: managing traffic in a big city, for instance; making complex predictions about the climate; supporting humans in the big decisions they have to make.
Intelligence of Accountability
A lot of companies are working hard on making machines able to explain themselves: to be accountable for the decisions they make, to be transparent. A lot of the research we do involves letting humans or users query the system. When Cobot, my robot, arrives at my office slightly late, a person can ask, “Why are you late?” or “Which route did you take?”
So we are working on the ability of these AI systems to explain themselves, while they learn and improve, and to provide explanations at different levels of detail. People want to interact with these robots in ways that make us humans eventually trust AI systems more. You would like to be able to ask, “Why are you saying that?” or “Why are you recommending this?” Providing that explanation is a large part of the research being done, and I believe that robots able to do this will lead to better understanding of, and trust in, these AI systems. Eventually, through these interactions, humans will also be able to correct the AI systems, so we are trying to incorporate these corrections and have the systems learn from instruction. I think that’s a big part of our ability to coexist with these AI systems.
The Worst Case Contingency
A lot of the bad things humans do to each other are very specific to human nature. Behaviors like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, or preferring our next of kin to strangers were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviors unless we explicitly build them in. Why would we?
Also, if someone deliberately builds a dangerous and generally-intelligent AI, others will be able to build a second, narrower AI whose only purpose is to destroy the first one. If both AIs have access to the same amount of computing resources, the second one will win, just as a tiger, a shark or a virus can kill a human of superior intelligence.
In October 2014, Elon Musk ignited a global discussion on the perils of artificial intelligence. Humans might be doomed if we make machines that are smarter than us, Musk warned, calling artificial intelligence our greatest existential threat.
Musk explained that his attempts to sound the alarm on artificial intelligence didn’t have an impact, so he decided to try to develop artificial intelligence in a way that would have a positive effect on humanity.
Brain-machine interfaces could overhaul what it means to be human and how we live. Today, technology is implanted in brains only in very limited cases, such as to treat Parkinson’s disease. Musk wants to go further, creating a robust plug-in for our brains that every human could use. The brain plug-in would connect to the cloud, allowing anyone with a device to share thoughts immediately.
Humans could communicate without having to talk, call, email or text. Colleagues scattered throughout the globe could brainstorm via a mindmeld. Learning would be instantaneous. Entertainment would be any experience we desired. Ideas and experiences could be shared from brain to brain.
We would be living in virtual reality, without having to wear cumbersome goggles. You could re-live a friend’s trip to Antarctica — hearing the sound of penguins, feeling the cold ice — all while your body sits on your couch.
Final Word – Is AI Uncertainty really about AI?
I think that the research being done on autonomous systems, autonomous cars and autonomous robots, is a call to humanity to be responsible. In some sense, it has nothing to do with the AI itself. The technology will be developed. It was invented by us, by humans. It didn’t come from the sky; it’s our own discovery. It’s the human mind that conceived this technology, and it’s up to the human mind to make good use of it.
I’m optimistic because I really think that humanity is aware that it needs to handle this technology carefully. It’s a question of being responsible, just like being responsible with any other technology ever conceived, including potentially devastating ones like nuclear armaments. But the best thing to do is invest in education. Leave the robots alone: they will keep getting better. Focus instead on education, on people knowing each other and caring for each other, on the advancement of society, of Earth and of nature, and on improving science. There are so many things we can get involved in as humankind that could make good use of the technology we’re developing.