Interview with the Queen of Research, Maria Cipollone – The Business of Social Games and Casino

Several weeks ago, I wrote a blog post about the perils of market research and surveys. A former colleague and one of the smartest researchers I have ever met, Maria Cipollone, pointed out that my post focused on the risks of (bad) research while there are also fantastic opportunities to apply user research to make better games. This argument resonated with me: I completely agree that more data is better than less, and that integrating qualitative and quantitative information leads to better products.

Given the success Maria and I enjoyed working together in the past, I invited her to discuss when and how to use different research tools. Below is our conversation:

Lloyd: Hi Maria, thanks for joining me today. Let’s start by discussing my recent blog posts, where I highlighted some of the challenges with market research and you accurately pointed out some issues with my rationale. I think at the core I was somewhat dismissive of surveys, and equated them with market research, but there is so much more that can be done to understand customers.

Maria: So, surveys often get misused because they’re usually one of the few research tools available to organizations. They pervade our culture; every consumer is used to getting a feedback survey. However, they’re really only good at measuring the following: (1) someone’s attitude about a brand/product, (2) someone’s perception of their own experience (e.g., was it negative or positive), and (3) their perception of an experience based on “word of mouth” or public opinion (e.g., WOMI or NPS).

Many people use surveys to: (1) try to predict what a consumer/player will do in the future, or (2) have consumers/players report on their behavior in the past. This is where surveys go wrong. If you are asking your consumer/player/user to self-report the frequency of their behavior, or to predict the likelihood that they’ll do something in the future, then you’re collecting bad (spurious) data.

Lloyd: What would be a good use case in your first scenario, then, using surveys for what they are good at? What would be a use case where a customer’s perception could provide actionable insights?

Maria: NPS surveys are good for finding out brand perception and the impact of experience on brand perception, if measured correctly. You would measure the score, but, more importantly, you would collect open-ended feedback on why that is so. For example, let’s say you have a casino game that plays ads every time the player wins. A player gives an NPS score of 2 (detractor) and says, “I hate how you interrupt my winning celebration with an ad.” You might move the ads to when the player loses, and re-survey the same cohort to see if that improves NPS.
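(To make the scoring concrete, here is a minimal Python sketch of how an NPS readout like the one Maria describes could be tallied before and after a change. The 0-6/7-8/9-10 buckets are the standard NPS convention; the data, comments, and function name are hypothetical.)

```python
# Minimal sketch of tallying an NPS survey with open-ended follow-ups.
# The 0-6 / 7-8 / 9-10 buckets are the standard NPS convention; the data
# shape and function name here are hypothetical, not from any specific tool.

def net_promoter_score(responses):
    """responses: list of (score, open_ended_comment) tuples, score 0-10."""
    promoters = sum(1 for score, _ in responses if score >= 9)
    detractors = sum(1 for score, _ in responses if score <= 6)
    return 100 * (promoters - detractors) / len(responses)

pre_change = [(2, "I hate how you interrupt my winning celebration with an ad"),
              (9, "Love the slots"), (7, "It's fine")]
post_change = [(8, "Ads after a loss are less annoying"), (9, ""), (10, "")]

print(net_promoter_score(pre_change))   # baseline cohort
print(net_promoter_score(post_change))  # same cohort, re-surveyed after moving the ads
```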

I have an even better example. In product research, we use a survey called a Kano analysis, where we ask customers to identify which features are important to them in a product. In the example of the casino game, do players want to give gifts? Do they want credits for watching ads?

This Kano analysis allows us, via survey, to look at the relative importance of features according to player perception. That way, we can see which features are perceived as “must-haves” or “delighters.” Then, dev and engineering can concentrate their efforts on those features.
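(Below is a minimal Python sketch of how one Kano answer pair per feature could be classified. The lookup table is the commonly used Kano evaluation table; the gifting/ad-credit examples echo Maria’s, and the rest is hypothetical.)

```python
# Sketch of classifying one Kano answer pair per feature. The lookup table is
# the commonly used Kano evaluation table; the feature names and responses
# are hypothetical examples for a casino game.

ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

# Rows = answer to "How would you feel if the feature were present?" (functional)
# Cols = answer to "How would you feel if it were absent?" (dysfunctional)
KANO_TABLE = {
    "like":     ["Questionable", "Attractive", "Attractive", "Attractive", "One-dimensional"],
    "expect":   ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-be"],
    "neutral":  ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-be"],
    "tolerate": ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-be"],
    "dislike":  ["Reverse", "Reverse", "Reverse", "Reverse", "Questionable"],
}

def classify(functional, dysfunctional):
    return KANO_TABLE[functional][ANSWERS.index(dysfunctional)]

# One player's answers about two candidate features
print(classify("like", "neutral"))    # gifting -> Attractive (a "delighter")
print(classify("expect", "dislike"))  # credits for watching ads -> Must-be
```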

Lloyd: How can you tell if the player actually knows? They might think they want gifts but actually want credits.

Maria:  We can trust what the player knows for two reasons:

But your question brings up another issue I’d like to address: market research vs. user research. If you’re trying to develop a product or features, you need user research to implicitly observe what users “want”.

In the example of the gifts/credits, if you suspect that users are saying one thing but wanting another, that’s a problem-space issue. You need to observe the use of the product qualitatively, as realistically as possible. This can be done with rapid ethnography, user interviews, contextual interviews, etc. This is user research: we observe the use of the product in its natural habitat to balance what users say against what they actually do. On top of this, you need metrics to measure against all of it.

Market research measures what consumers will buy and their attitude about the purchase.

User research measures/observes what players will do, and what tools are needed to accomplish that (UX/UI design).

Lloyd: Makes sense, but before delving further into user research (which to me is the biggest opportunity), one last question about surveys/market research. Some of the feedback I got from my post was that surveys largely fail because they are worded poorly. My gut reaction to that is similar to my reaction to the argument that socialism has only failed because it has never been implemented as intended. Is it utopian to expect surveys (or election polls) to be designed in a non-leading way (unless you have an awesome researcher, which not everyone has), or are most surveys going to be flawed, which is a reason to use other research tools?


Maria: Haha about socialism; that’s less of a linguistic problem and more of a human problem. But I definitely have a response.

Surveys can fail for a lot of reasons; survey design is definitely one of them. You can pick up biases because people tend to agree, but there are simple ways to get around that (e.g., always write scales from negative to positive and stay away from agreement scales).

But they can also fail for other reasons. There are many reasons surveys fail, and biases are a popular explanation, but I think they fail mostly because people use them in the wrong way. It’s like trying to use an X-ray to look at muscle tissue. You need an MRI for that.

Lloyd: Thanks, I’d agree with that analysis. Now onto the really interesting thoughts related to user research. While everyone is familiar with surveys, can you discuss the different user research tools and options? You touched above on observing customers in their natural habitat (sort of like Blue Planet), and I remember that was one of your super-powers when we worked together: the value of going to a player’s house and watching them experience the game. Can you delve deeper into how that is done, other types of user research, and maybe even a rough ranking of what should be used when?

Maria: Of course, and thanks for the opportunity. I’ll list some popular methods and what they’re good for, then I’ll discuss the future of UX research as I see it; it’s certainly changing 🙂

Lloyd: Perfect

Maria:

This is when you are trying to uncover problems for your current product, or even a new product, to solve. Good products solve problems, not just occupy a market space. Too often, at least in tech, we introduce a technology with no problem to solve. (Look at Voice Assistants for a good example of no problem space.)

Lloyd: I can think of a few examples of tech that doesn’t solve a problem and then fails, a few billion later 🙂 (maybe Samsung’s new foldable phone).

Maria: You could optimally do #2 as rapid ethnography, where you go into a home/office/environment to conduct the interviews.

Lloyd: What’s the benefit of going into a home rather than doing it in a lab?

Maria: Home/environment is always the optimal scenario because that’s how the product gets used in the real world. For example, in slots, it’s important to have visual and haptic feedback, because players often sit back while playing.

The more feedback the UX gives them, the more likely they are to turn their attention from the TV or whatever is distracting them back to the game. Knowing the environment that the game/product gets used in is really important to UX design.

Lloyd: Pardon my ignorance, but “haptic feedback”?

Maria: My fault. Haptic feedback is a vibration or buzz that the product emits, like when a game controller rumbles or a phone vibrates.

Visual feedback would be fireworks, flashes, etc.

Lloyd: And they would get a different experience in a lab (or your office) than in their natural environment?

Maria: In the lab, it’s superficial. The player is paying sole attention to the game because I’m watching the player, and they want to do a good job (performance bias). In the home, I get the real deal: the dog is barking, the TV is on. I know that their attention gets pulled away from the game, and part of my actionable insight can be to redirect their attention to the game via visual/haptic/sound feedback (although most players play with sound off).

But if you’re short on money, there are ways to interview users in context, even remotely, with tools like UserTesting.com.

Lloyd: And that was going to be my next question, where do those tools fit in?

Maria: From what I said, I would rank it as: watch the user experience the product in their home if possible (if it’s a game normally played at home), then use a tool like UserTesting.com, and last, your office/lab.

Lloyd: Other than observing the player, are there other user research techniques or tools worth incorporating into your playbook?

Maria: Yes, I actually like to do quantitative UX research. Methods include surveys, variant (A/B) testing, and behavior modeling.

Lloyd: Would you mind elaborating on each?

Maria: I use surveys to benchmark experience. For example, before we do major game OS updates, I might send out a survey and ask players to rate how slow the game seems to them. Then, I would follow up after the OS update to see if perception shifted and the game seems faster or more colorful.


Here, perception is important, because a happy player is likely to give a good app store rating for a game that seems fast, bright, and fun.

You can also measure the psychological impact of your OS improvements.
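(A minimal sketch of that benchmarking arithmetic, assuming a simple 1-7 “how fast does the game feel?” rating asked of the same cohort before and after the update; the sample data is made up.)

```python
# Minimal sketch of benchmarking perceived speed before and after an update.
# The 1-7 rating scale and the sample data are made up for illustration.
from statistics import mean, stdev

before = [3, 4, 2, 3, 3, 4, 2]   # "How fast does the game feel?" pre-update
after  = [5, 6, 5, 4, 6, 5, 5]   # same question, same cohort, post-update

shift = mean(after) - mean(before)
print(f"Perceived speed shifted by {shift:+.1f} points "
      f"(before: {mean(before):.1f} ± {stdev(before):.1f}, "
      f"after: {mean(after):.1f} ± {stdev(after):.1f})")
```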

A/B testing we use to measure small changes, like copy changes (e.g., changing the wording from “Buy Now” to “Purchase Coins” and seeing if that creates an upsell/lift).
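(A minimal sketch of reading out such a copy test, assuming a standard two-proportion comparison; the counts are made up, and the variant names follow Maria’s example.)

```python
# Sketch of reading out a copy-change A/B test ("Buy Now" vs "Purchase Coins").
# The counts are invented; the lift and two-proportion z-test are standard arithmetic.
from math import sqrt

control_buyers, control_n = 480, 10_000   # saw "Buy Now"
variant_buyers, variant_n = 540, 10_000   # saw "Purchase Coins"

p_control = control_buyers / control_n
p_variant = variant_buyers / variant_n
lift = (p_variant - p_control) / p_control

# Pooled two-proportion z-statistic
p_pool = (control_buyers + variant_buyers) / (control_n + variant_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n))
z = (p_variant - p_control) / se

print(f"Conversion: {p_control:.2%} -> {p_variant:.2%} (lift {lift:+.1%}), z = {z:.2f}")
```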

Behavior modeling we do to sharpen metrics and KPIs.

What do we know from user interviews that might help us understand player engagement metrics?

How can we develop hypotheses from the interviews that we can model in the metrics? For example, we observe that users quit when they are asked to purchase every time they run out of coins. So we look at our engagement metrics and see if we can link a drop in session length to the purchase flow timing. Maybe we switch it up, floating the purchase flow after a win or at the open of the app.
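(A minimal sketch of turning that interview hypothesis into a metrics check, assuming hypothetical session records with a field recording when the purchase prompt fired; none of the field names come from a real pipeline.)

```python
# Sketch of turning the interview hypothesis ("players quit when the purchase
# prompt fires the moment they run out of coins") into a metrics check. The
# session records and field names are hypothetical, not from a real pipeline.
from statistics import mean

sessions = [
    {"prompt_trigger": "coin_out", "session_minutes": 4.2},
    {"prompt_trigger": "coin_out", "session_minutes": 3.1},
    {"prompt_trigger": "after_win", "session_minutes": 9.8},
    {"prompt_trigger": "after_win", "session_minutes": 7.5},
    {"prompt_trigger": "app_open", "session_minutes": 8.1},
]

# Group session length by where the purchase prompt appeared
by_trigger = {}
for s in sessions:
    by_trigger.setdefault(s["prompt_trigger"], []).append(s["session_minutes"])

for trigger, minutes in by_trigger.items():
    print(f"{trigger}: mean session length {mean(minutes):.1f} min (n={len(minutes)})")
```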

I think this is the future of UX research: using observed insight from small samples to understand our users at metric scale. The future of UX research is helping data science and machine-learning practitioners understand what they’re seeing at scale. Eventually, this will help build algorithms.

Lloyd: That’s really an interesting approach.

One last question around two buzz phrases: focus groups and personas. I have strong thoughts on both but do not want to create a leading question.

Maria: Sure!

Focus groups are misused like surveys are. I’m not a practitioner. I guess the nicest thing I can say is that if you’re looking for group-speak about the conception of a brand or experience, then go ahead. I find that product developers like them because they make them feel better. It’s a bunch of enthusiasts speaking enthusiasm.

Lloyd: I’ve seen that

Maria: But while they can be useful for some, I think they can be very misleading, like a COVID test you can do cheaply and quickly but that has a high false-positive rate. I’m not a good representative of the method, haha.

Lloyd: It may be confirmation bias but it is what I was hoping you’d say.

Maria: I’ve been forced to do focus groups, mostly to satisfy power.

Personas can be very valuable, but you need to do a few things to get them right.

Lloyd: Tell me more

Maria:

A persona is a framework for design. A segment is a portion of an audience.

Lloyd: What’s the value of personas over just segmentation?

Maria: Personas help designers inhabit the psychology of the player/user so they can make a better user experience.

Segments help product people go after an audience or understand the share of the audience the persona occupies.

Personas are for designers.

Segments are for the rest of us.

Lloyd: My concern with personas is whether we are over-simplifying (i.e., stereotyping) players. It sounds like the profiles that authorities sometimes abuse.

Maria: Ah, that’s a good concern, and I’ve had many conversations about the implicit biases that get produced in personas. If you ground them in qualitative research (e.g., 10-20 interviews) where you make sure your sample is diverse, you can protect against that issue. And you must always preach that people shift in and out of personas. For example, I’m a casual gamer with slots, but not poker!

Lloyd: Thanks so much Maria, I found our chat really enlightening. Before we end, anything you want to add?

Maria: Research is worth its weight in gold! Let’s do it again AND THANK YOU!

I am GM of Chumba at VGW, where I lead the Chumba Casino team. Previously, I was Director of StarsPlay, the social gaming vertical for the Stars Group. I was also Sr Dir at Zynga’s social casino (including Hit It Rich! slots, Zynga Poker and our mobile games), where I led VIP CRM efforts and arranged licensing deals. I have been a central part of the senior management team (CCO, GM and CGO) at three exits (Merscom/Playdom, Playdom/Disney and Spooky Cool/Zynga) worth over $700 million.
