The next
BriefingsDirect Voice of the Customer discussion explores how social gaming company
Playtika uses big-data analytics to deliver captivating user experiences and engagement.
We'll
learn how feedback from massive user action streams can be analyzed in bulk
rapidly to improve the features and attractions of online games, helping Playtika react quickly in an agile market.
To learn more about leveraging big data in the social casino industry, we're pleased to welcome
Jack Gudenkauf, Vice President of Big Data at Playtika in Santa Monica, California. The discussion is moderated by me,
Dana Gardner, Principal Analyst at
Interarbor Solutions.
Here are some excerpts:
Gardner: I understand that you're part of
Caesars Interactive Entertainment and that you have a number of online games. Tell us about Playtika.
Gudenkauf:
We have a few free-to-play social casino games. In fact, we're the
industry leader. We have maybe 10 games at this point. World Series of
Poker, which you've probably heard about, Slotomania, House of Fun,
Bingo Blitz, across a number of studios.
Worldwide, we're about 1,000 employees. As I say,
we're the industry leader in this space at this moment. And it's a very
challenging space, as you might imagine, just within gaming itself. The
amount of data is huge, especially across all of these games. Collecting
information about how the users play the game and what they like about it
is really a completely data-driven experience.
If
we release a new feature, we get feedback. Of course, it’s social
gaming as well. If we find out that they don't like the feature, we have
to rev the game pretty quickly. It's not like the old days, where you
go away for a year or so, and come out with something that you hope
people like -- Halo, or something like that. It's more about the users
driving the experience and what they enjoy.
So we'll
try something with some content or something else and see if they like
this feature or functionality. If the data comes back immediately showing
that, with a new version of the game, players are clearly not doing the
slot spins, we literally change the game.
In
fact, in the Bingo Blitz game, we will revise the game as often as once
a week, if you can imagine that. So we have to be pretty agile. The
data completely drives the user experience as well. Do they like this,
do they not like this, shall we make this game change?
Data-driven environment
It’s a completely data-driven environment. That's what brought me there. I came from
Twitter, where we worked with very big data, as you might imagine, with Hewlett Packard Enterprise (HPE)
Vertica and
Hadoop
and such, but it was more about volume there. Here it’s about variety,
velocity, and changing game events across all of our studios.
You
can imagine the amount of data that we have to crunch through, do
analytics on, and then get user feedback. The whole intention is to get
feedback sooner so that we can change the game as rapidly as possible,
so that users are happy with the game.
So it’s completely user-driven, in terms of the experience and what they
enjoy, which is fun and makes it challenging as well.
Gardner:
So being a data scientist in this particular organization gives you a
pretty important place at a major table. It's not something to think
about at the end of the month when we run some reports. This is
essential and integral to the success of the company?
Gudenkauf: Of course, we do analyze the data for daily, monthly, and general
key performance indicators (KPIs),
daily active users or monthly active users, those types of things. But
you're absolutely right. With the game events themselves, we need to
process the data as quickly as possible and do the analysis. So
analytics is a huge part of our processing.
We
actually have a game economy as well, which is kind of fascinating. If
you think of it in terms of the US economy, you can only have so much
money in the economy without having inflation and deflation. Imagine if I
won all the money and nobody else could have money to play with. It’s
kind of game over for us, because they can’t play the game anymore. So
we have to manage that quite well.
Of course, with the user experience and what players enjoy, and with
free-to-play in particular, the demand is pretty high. It’s like with apps
that you pay for: the 99-cent apps are the ones that people think the most about.
When
somebody is spending a dollar, it's very important to them. You want
the experience to be a great experience for them. So the data-driven
aspects of that and doing the analysis and analytics of it, and feeding
that back to the game is extremely important to us. The velocity and the
variety of games and different features that we have and processing
that as fast as possible is quite a challenge.
Gardner:
Now, games like poker, slots, or bingo, these are games that have been
around for decades, if not hundreds of years, and they've had a new life
online in the past 15 years, which is the Dark Ages of online gaming.
What's new and different about games now, even though the game is
essentially quite familiar to people? What's new and different about a
social casino game?
Social aspect
Gudenkauf:
I've thought about that quite a bit. A lot of it has to do with the
social aspect. Now, you can play bingo, not just with your friends at
the local club, but you can play with people around the world.
You
can share items and gifts, and if you are running low on money, maybe
you can borrow some from your friends. And you can chat with them. The
social aspect just opened up all kinds of avenues.
In
our case, with our games in the studios, because they're familiar, they
stand the test of time. Take something like bingo or slots, as opposed
to some new game that people don't really understand. They may like it.
They may only like it for a while. It’s like playing Scrabble or
Monopoly with your family. It's a game that's just very familiar and
something you enjoy playing.
But with the online and the social aspect of it, I explain it to other people as: imagine
Carmen Sandiego
meets bingo. You can have experiences where you're playing bingo, you
go on this journey to Egypt, and you're collecting items and exploring
Egypt, trying to get to the next stage. We can take it to places that you
wouldn't normally take a traditional kind of board game, and in a more
social way.
Gardner:
So this really appeals to what's conceived of as entertainment in
multiple ways for an individual. Again, as you established, the analysis
and feedback loops are really important.
I understand
why doing great data analysis is so important to this particular use
case. Tell us a little bit about how you pull that off. What sort of
data architecture do you have? What sort of requirements do you have?
What are the biggest problems you have to overcome to achieve your
goals?
Gudenkauf: If you think about the
traditional way of consuming data and getting it into a reporting
system, you have an extract. You're going to bring in data from
somewhere, and of course, in our case it’s from mobile devices, the web,
from playing on Facebook. You have information about how much money
users spend, and about user behavior. Did they like it?
So you
extract that data as usual, and then you transform it. You reshape it
and change it around a little bit to put it in a format to get it into a
data warehouse like Vertica.
Once you get it into HPE Vertica, you have the
extract, transform, and load (ETL) of the traditional model. You load it into Vertica and then do your analysis there, where you can do
SQL,
JOINs, and analytics over it.
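As a small illustration of that analysis step, here is a minimal sketch of a daily-active-users query run over JDBC from Scala against a hypothetical game_events table in Vertica; the connection string, table, and column names are illustrative, not Playtika's actual schema.

    import java.sql.DriverManager

    // Minimal sketch: querying a hypothetical game_events table in Vertica over
    // JDBC after the ETL has loaded it. The Vertica JDBC driver must be on the
    // classpath; host, credentials, and schema are illustrative.
    object DailyActiveUsers {
      def main(args: Array[String]): Unit = {
        val conn = DriverManager.getConnection(
          "jdbc:vertica://vertica-host:5433/analytics", "dbadmin", "secret")
        try {
          val rs = conn.createStatement().executeQuery(
            """SELECT event_date, COUNT(DISTINCT user_id) AS dau
              |FROM game_events
              |WHERE event_date >= CURRENT_DATE - 7
              |GROUP BY event_date
              |ORDER BY event_date""".stripMargin)
          while (rs.next()) {
            println(s"${rs.getDate("event_date")}: ${rs.getLong("dau")} daily active users")
          }
        } finally {
          conn.close()
        }
      }
    }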
A
new industry term that I'm coining is what we call Parallelized
Streaming Transformation Loader (PSTL) instead of ETL. This is about
ingesting data as fast as possible, processing it, and making analytics
available through the entire data pipeline, instead of just in the data
warehouse.
Real-time streaming
Imagine, instead of the extract, we're taking real-time streaming data. We're reading, in our case, off a
Kafka queue. Kafka is very robust and has been used by LinkedIn and Twitter. So it’s pretty substantial and scalable.
We
read the messages in parallel as they're streaming in from all the game
studios, certain amounts of data here and there, depending on how much
we do with the particular studio. With Bingo Blitz, in our case, we
consume a lot more user behavior than say some of the other studios.
But we ingest all the data. We need to get it in as real-time streaming,
so we read it in parallel. That’s the parallel part and the streaming
part. But then we take it from the stream, and instead of extracting,
it's being fed to us.
Then we do parallel transformations in
Spark and our Hadoop cluster. Think of it as bringing in a bunch of
JSON event data and putting it into an in-memory table that’s distributed in Spark.
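As a rough sketch of that ingestion step, the snippet below uses the Spark 1.x streaming API with the spark-streaming-kafka integration to read JSON events off Kafka in parallel and register them as an in-memory table; the broker addresses, topic names, and table name are illustrative assumptions, not Playtika's actual configuration.

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    // Read JSON game events from Kafka in parallel and land them in an
    // in-memory distributed table in Spark.
    object GameEventIngest {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(new SparkConf().setAppName("game-event-ingest"), Seconds(10))

        val kafkaParams = Map("metadata.broker.list" -> "kafka-1:9092,kafka-2:9092")
        val topics = Set("bingo-blitz-events", "slotomania-events")

        // Each Kafka partition maps to a Spark partition, so the read itself
        // happens in parallel across the cluster.
        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, topics)

        stream.foreachRDD { rdd =>
          val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
          // Parse the raw JSON payloads into a distributed in-memory table.
          val events = sqlContext.read.json(rdd.map { case (_, json) => json })
          events.registerTempTable("incoming_game_events")
          // Downstream transformations and the parallel Vertica load start here.
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }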
Then,
we do parallel transformations, meaning we can restructure the data, we
can do transforms from uppercase, lowercase, whatever we need to do.
But it's done in parallel across the cluster as well. Where,
traditionally, there was a single monolithic app running, we can run
independently of the extract and the load.
We have so much data that we need to also do the transformations in parallel. We do that in what are called
Resilient Distributed Datasets (RDDs).
It’s kind of a mouthful, but think of it as just a bunch of slices of
data across a bunch of computers and your nodes, and then doing
transforms on that in parallel. Then, something that has been a dream of
mine is how to get all that data in parallel at the same time into HPE
Vertica.
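Before getting to that load, here is a minimal sketch of what a parallel transform over those RDD slices can look like; the GameEvent case class and its fields are hypothetical stand-ins, not the actual event schema.

    import org.apache.spark.rdd.RDD

    // A hypothetical, simplified event record.
    case class GameEvent(userId: String, studio: String, eventType: String, amount: Double)

    object ParallelTransforms {
      // mapPartitions runs once per slice of the RDD, on whichever node holds
      // that slice, so the reshaping happens in parallel across the cluster
      // rather than in one monolithic step.
      def normalize(events: RDD[GameEvent]): RDD[GameEvent] =
        events.mapPartitions { slice =>
          slice.map { e =>
            e.copy(
              studio = e.studio.toLowerCase,        // e.g. normalize case
              eventType = e.eventType.toUpperCase)  // restructure fields as needed
          }
        }
    }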
HPE Vertica does a great job of doing
massively parallel processing (MPP)
and all that means is running the query and pulling data off of
different nodes in the cluster. Then, maybe you're grouping by this and
you are summing this and doing an average.
But, to date,
they hadn't had something that I tried to do when I was at Twitter and
have managed to pull off now, which is to load the data in parallel. While
the data is in memory in Spark's distributed datasets, we use the
Vertica hash function, which tells us exactly where the data will land
when we write it to a Vertica node.
We can say, User
A, if I were to write this to Vertica, I know that it’s going to go on
this machine. User B will go to the next machine. It just distributes
the load, but we,
a priori, hash the data into buckets, so that
we know, when we actually write the data, that it goes to this node.
Then, Vertica doesn’t have to move it. Usually you write it to one node
and it says, "No, you really belong over here," and it has to move and
shuffle the data, like a traditional
MapReduce.
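A rough sketch of that pre-hashing step in Spark appears below: records are keyed by user ID and routed to a bucket that corresponds to a single Vertica node, so nothing needs to shuffle after the write. The verticaHash function is only a placeholder; the actual pipeline re-implements Vertica's own hash so the buckets line up exactly with Vertica's segmentation.

    import scala.reflect.ClassTag
    import org.apache.spark.Partitioner
    import org.apache.spark.rdd.RDD

    // Route each record to the bucket owned by the Vertica node that would
    // store it, so Vertica never has to shuffle rows after the write.
    class VerticaNodePartitioner(numNodes: Int) extends Partitioner {
      override def numPartitions: Int = numNodes
      override def getPartition(key: Any): Int =
        (verticaHash(key.toString) % numNodes).toInt

      // Placeholder only: stands in for a re-implementation of Vertica's
      // segmentation hash, which is what the pipeline actually relies on.
      private def verticaHash(s: String): Long = s.hashCode.toLong & 0x7fffffffL
    }

    object PreHash {
      // Events keyed by user_id; after partitionBy, every record in a given
      // Spark partition belongs to the same Vertica node.
      def bucketByNode[V: ClassTag](events: RDD[(String, V)], numNodes: Int): RDD[(String, V)] =
        events.partitionBy(new VerticaNodePartitioner(numNodes))
    }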
Working with Vertica
So we created something in conjunction with the Vertica developers. We announced it. That part of it is kind of a
TCP server
aspect extending the Copy command that exists in Vertica itself.
We literally go from streaming in parallel, to reading into in-memory data
structures, to doing the transformations, and then writing directly from
memory into our Vertica data warehouse.
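The TCP server extension to Copy isn't reproduced here, but the sketch below is a simplified stand-in that writes each in-memory Spark partition to Vertica over plain JDBC batches, just to show the shape of a per-partition, parallel load; the connection details and target table are illustrative.

    import java.sql.DriverManager
    import org.apache.spark.rdd.RDD

    // Simplified stand-in for the final load: each Spark partition opens its
    // own connection and writes its rows, so partitions load in parallel.
    // The real pipeline streams partitions through an extended Vertica COPY
    // over TCP rather than issuing row-by-row inserts.
    object ParallelLoad {
      def load(events: RDD[(String, String, Double)]): Unit =
        events.foreachPartition { partition =>
          val conn = DriverManager.getConnection(
            "jdbc:vertica://vertica-host:5433/analytics", "dbadmin", "secret")
          val stmt = conn.prepareStatement(
            "INSERT INTO game_events (user_id, event_type, amount) VALUES (?, ?, ?)")
          try {
            partition.foreach { case (userId, eventType, amount) =>
              stmt.setString(1, userId)
              stmt.setString(2, eventType)
              stmt.setDouble(3, amount)
              stmt.addBatch()
            }
            stmt.executeBatch()
          } finally {
            stmt.close()
            conn.close()
          }
        }
    }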
This approach allows us
to get the data in as fast as possible, from the stream right to the
write. We don’t have to hit a disk along the way, and we can do analytics
in Vertica sooner. We can also do analytics in Hadoop clusters for
older data and do machine learning on that. We can do all kinds of
things based on historical user behavior.
If we're
doing a sale or something like that, we can see how well it's resonating
compared to the past. What we're doing is pushing the envelope, pushing
the analytics as close as we can to the actual game itself.
As
I said, traditionally, you do the analytics, get the feedback, change
the game, release it in a week, etc. We're going to try to push that all
the way up to be as near real time as we can. Basically, the PSTL
pipeline allows us to do that, do analytics, and tighten that loop down
so that we can respond to user behavior as fast as possible.
Once you get the data in as fast as you can, reshape it while it’s in
memory, which of course is faster, and take advantage of the parallel
transformations and the parallel loading at the same time, it’s just a
far more optimized solution.
Gardner:
It’s intriguing. It sounds as if you're able, with a common
architecture, to do multiple types of analysis readily but without
having to reshuffle the deck chairs each time. Is that fair?
Gudenkauf:
That's exactly right. That’s the beauty of this model and why I'm
putting up more prescriptive guidance around it. It changes the paradigm
of the traditional way of processing data.
We announced some benchmarking. Last year at the HPE Big Data Conference, Facebook stole the show with 36
terabytes
an hour on 270 machines. With our model, you could do it with about 80
machines. So it scales very well. Some people say, "We're not Twitter or
Facebook scale, but the speed at which we want to consume the data and
make it available for analytics is extremely important to us."
The
less busy the machines are, the more you can do with them. So does it
need to scale like that? No, we are not processing as much data, but the
volume, velocity, and variety is a big deal for us. We do need to
process the volume, and we do have a lot of events. The volume is not
insignificant. We're talking about billions of events, mind you. We're
not on the sheer scale of, say, Twitter or Facebook, but the solution
works in both scenarios.
Gardner: So,
Jack, with this capability for analysis as close to real time as possible, with the
volume and the variety that you are able to accomplish, while this is a
great opportunity for you to react in a gaming environment, you're also
pushing the envelope on what analysis and reaction can happen to almost
any human behaviors at scale. In this case, it happens to be gaming, but
there are probably other applications for this. Have you thought about
that or are there other places you can take it within an interactive
entertainment environment?
All kinds of solutions
Gudenkauf:
I can imagine all kinds of solutions for it. In fact, I've had a number
of people come up to me and say, "We're doing this at the Chicago Stock
Exchange, and we have a massive amount of data streaming in. This is a
perfect solution for that."
I've had other people come
in to talk to me about other aspects and other games as well that are
not social casino genre, but they have the same problem. So it's the
traditional problem of how to ingest data, massage it, load it, and then
have analytics through that entire process. It’s applicable really in
any scenario. That’s one of the reasons I'm so excited about the PSTL
model, because it just scales extremely well along the way.
Gardner:
Let’s relate this back to this particular application, which is highly
entertaining games that react, and maybe even start pushing the envelope
into anticipating what people will want in a game. What’s the next step
for making these types of games engaging? I'm even starting to toy with
the concept of
artificial intelligence (AI),
where people wouldn’t know that it’s a game. They might not even know
the difference between the game and other social participants. Are we
getting anywhere close to that?
Gudenkauf:
You're thinking very clearly about the spectrum of analytics in
general. Before, it was just general reporting in the feedback loop, but
you're absolutely right. As you can see, it’s enabled through our model
of prescriptive analytics. Looking at historical data and doing machine
learning, we can make better determinations about games and game behavior
that will drive the game based on historical knowledge or incoming data;
that’s more predictive analytics.
Then, as you say,
maybe even into the future, beyond predictive and prescriptive
analytics, we can almost change as rapidly as possible. We know the user
behavior before the user knows the behavior. That will be a great
world, and I'm sure we would be extremely successful if we got to that
end of the spectrum. But just doing the prescriptive analytics alone, so that
the user is happy with the game, and we can get that back to them as
quickly as possible, that’s big in and of itself.
Gardner: So maybe a new game someday will be to pass the
Turing Test: you against our analysis capabilities?
Gudenkauf:
Yeah, that would be pretty cool. Maybe eventually it will tie into the
whole virtual-reality world, with the game reacting to behavioral
information immediately. That will be neat.
Gardner:
Very exciting world coming our way, right? We're only scratching the
surface. I guess I have run out of questions because my mind is reeling
at some of these possibilities.
One last area though.
For a platform like HPE Vertica, what would you like to see them do
intrinsic to the product? There was an announcement recently about the
next version of Vertica, but what might be on your list, a wish-list if
you will, for what should be in the product to allow this sort of thing
to happen even more readily?
Influencing the product
Gudenkauf:
That’s one of the reasons we go to conferences. It’s one of the few
conferences where you can get to the actual developers or professional
services and influence the product itself.
One of the
reasons why I like to be on the leading edge or bleeding edge is so that
we can affect product development and what they are working on. I've
been fortunate enough to be able to work with developers and people
internal to HPE Vertica for quite a while now. I just love the product and I
want to see it be successful. With their adoption of, and greater openness
to, open source like Spark and MapReduce, the whole
ecosystem works well together, as opposed to opposing each other, which I
think is what most people expect. It’s a very collaborative, cooperative
environment, especially through our pipeline.
I really
like the fact that when I talk about things like Kafka and the PSTL, and
that Spark is a core part of our architecture, now we're having
conversations, and lots of them, to help Vertica and influence them to
invest more in Spark and the interaction between the Vertica data warehouse,
Spark, and that ecosystem from Kafka.
From the work that we did with
Vertica over the last year, reading streaming data from Kafka into
Spark, of course, and then into Vertica, they said that reading
real-time streaming data from Kafka directly into HPE Vertica would be a
great add-on, and they announced it. Ben Vandiver and the developers
announced it.
I really want to be in a place, and this
affords us that place, to influence where they are going,
because it benefits all of us and the entire community. It's being able
to give them prescriptive guidance as well from the customer
perspective, because this is what we're doing in the real world, of
course. They want to make us happy, and we will make them happy.
Our
investments have been in things like Kafka streaming and Spark, and in how
Spark SQL works with Vertica and VSQL. They don’t necessarily have
to compete. There is a world for both. So coexisting, influencing that,
and having them be receptive to it is amazing. A lot of companies aren’t
very receptive to taking the feedback from us as consumers and baking
that into offerings.
One of the things in our model, to
load the data as fast as possible in parallel, is that we pre-hash the
data. If you just take user IDs, for instance, and hash on those
IDs, so that you can put this user on this node and that user on another
node, you get an even distribution of data. That hash function wasn’t exposed
in Vertica. I've been asking for it for years, since the Twitter days.
So
we wrote our own version of it. I managed to have the Vertica
developers, which is a rare and great opportunity, review what we had
done. They said, "Yes, that’s spot on. That’s exactly the
implementation." I said, "You know what would be even better? I've been
asking for this for years, and I know you have lots of other customers.
Why don’t you just make it available for everybody to use? Then I don’t
have to use mine, and everybody else can benefit from it as well."

They announced in 2015 that they're going to make it available. So being able
to influence things like that just helps the whole ecosystem.
Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.