Life can be tough for mobile app developers.
After creating an app that helps users book a hotel room or redeem loyalty points, they need to figure out how well the app works - and how it stacks up against competitors. When a customer writes an indignant online review saying "I can't scroll right!" and gives the app only one star, developers must fix the problem, and fast.
But pinpointing exactly why users are dissatisfied, based on several thousand short online reviews, is labor-intensive, time-consuming and expensive, and requires multiple steps. And the stakes are high. Mobile apps that give customers a bad experience can damage the company's brand, alienate rewards customers and increase defections to competitors.
A Cornell statistician and his colleagues have found a faster way for developers to improve mobile apps, with a new text-mining method that aggregates and parses customer reviews in one step.
"The idea was, can you devise a method that would look through all the ratings, and say these are the topics people are unhappy about and this is maybe where a developer should focus," said Shawn Mankad, assistant professor of operations, technology and information management in the Samuel Curtis Johnson Graduate School of Management.
The idea could have significant implications for mobile commerce, which is expected to reach $250 billion by 2020. With the increasing prevalence of smartphones, mobile commerce has already begun to significantly influence all forms of economic activity, according to Mankad and his colleagues.
Mankad is lead author of "Single Stage Prediction with Embedded Topic Modeling of Online Reviews for Mobile App Management," which will appear in an upcoming issue of the Annals of Applied Statistics. Mankad's co-authors are Cornell doctoral candidate Shengli Hu and Anandasivam Gopal of the University of Maryland.
The paper is one of several Mankad has written with a $525,000 grant from the National Science Foundation. The initial goal was to create new statistical tools to monitor the stability of the financial system.
In the latest study, Mankad and his colleagues applied those tools to the mobile apps problem.
In text mining, a common way to represent texts is to construct a huge matrix to keep track of which words appear in which online review. "It becomes a really wide matrix. And you have so many columns that you need to shrink them down somehow," Mankad said. "So that's where we're applying the method."
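To give a sense of what that wide matrix looks like, here is a minimal sketch of building a document-term matrix in Python. The sample reviews and the use of scikit-learn are illustrative assumptions, not the pipeline the researchers actually used.

```python
# Minimal sketch: a document-term ("reviews x words") matrix.
# The sample reviews and scikit-learn are illustrative assumptions,
# not the authors' actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "cannot scroll right after the latest update",
    "booking a hotel room was fast and easy",
    "loyalty points never show up in my account",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)   # one row per review, one column per distinct word

print(X.shape)                          # (3, number of distinct words) - grows very wide in practice
print(vectorizer.get_feature_names_out()[:10])
```

With thousands of reviews and a vocabulary of thousands of words, the matrix has far more columns than any analyst can inspect directly, which is why some form of dimension reduction is needed.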
The model, in effect, takes a weighted average of words that appear in online reviews. Each of those weighted averages represents a topic of discussion. The method not only provides guidance on a single app's performance but also compares it to competing apps over time to benchmark features and consumer sentiment.
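The sketch below illustrates the general "topics as weighted averages of words" idea with an off-the-shelf non-negative matrix factorization. It is a generic illustration under assumed toy data, not the single-stage model from the paper, which embeds the topic step directly in the rating prediction.

```python
# Generic matrix-factorization topic modeling (NMF) on made-up reviews.
# Illustrates "each topic is a weighted combination of words"; this is NOT
# the single-stage method from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

reviews = [
    "app crashes every time I try to book a room",
    "crashes after the update, please fix the crash",
    "great deals on hotels, booking was smooth",
    "found a cheap hotel deal, easy booking",
    "cannot redeem my loyalty points, rewards are broken",
    "loyalty rewards never apply at checkout",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)        # reviews x words matrix

nmf = NMF(n_components=3, random_state=0)    # 3 illustrative topics
doc_topics = nmf.fit_transform(X)            # how strongly each review loads on each topic
words = vectorizer.get_feature_names_out()

# Each row of nmf.components_ is a weighted combination of words: a "topic".
for k, topic in enumerate(nmf.components_):
    top = topic.argsort()[::-1][:4]
    print(f"topic {k}:", ", ".join(words[i] for i in top))
```

Reading off the top-weighted words per topic is what lets a developer see, at a glance, whether complaints cluster around crashes, booking, or loyalty features.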
"The idea is you take the text, you take the ratings, and it just outputs these dashboards that you can look at," Mankad said.
They applied their approach to both simulated data and more than 104,000 mobile app reviews covering 162 versions of apps from three of the most popular online travel agencies in the United States: Expedia, Kayak and TripAdvisor. There were more than 1,000 reviews per app per year.
Mankad and his colleagues found that their text-mining model achieved better forecasting accuracy than standard methods on both real reviews and simulated data. They also found that the method can help companies weigh the pros and cons of how frequently they release new versions of their apps.
"In text mining, there is a super popular class of methods based on Bayesian modeling. The field can get dogmatic about what technique to use," Mankad said. "In this paper, we're doing something different by trying a matrix factorization method. To me, it's OK to try a new method when you think it may have an advantage in certain situations."