The Pellucid Perspective - April 2016 - (Page 2)
In search of meaningful operations performance measurement
By Stuart Lindsay
The PGA of America has thrown in the towel on PerformanceTrak and it will be missed. Pellucid has used it as a
reference point since its inception and we have actively encouraged our clients and State of the Industry attendees to participate. Despite our efforts and PGA encouragement, reporting
was declining, and the shrinking sample size and shifting make-up were
making the data less accurate. We applaud the PGA's reluctance
to put out inaccurate information, but they also could have tried
to fix the problem before they discontinued it - there are a lot
of PGA Professionals who have used that information for both
self-assessment and data to provide performance measurement
information to their Boards and owners.
If we understand correctly, the historical PerformanceTrak
data has been turned over to Golf Datatech and the NGCOA,
so at least the data still exists. It is also our understanding that
the NGCOA is very interested in trying to build a replacement
system/service. There are also a couple of other industry veterans
who are trying to create "benchmarking" platforms.
Virtually all of these discussions start with a reference to the
Smith Travel Research (STR) benchmarking service in the hotel industry.
First, the basic STAR Report data of Average Daily Rate (ADR)
and Revenue per Available Room (RevPAR) started out being
"free" to any hotel that supplied its data. Each hotel's
ADR and RevPAR are then compared to those of other hotels in its
price range by geographic area. This is similar to the valuable
function Golf Datatech has provided on Rounds since 1999, but
Rounds only tell us how many "room nights" we sold and that is
not nearly as meaningful as the revenue those rounds generated.
The STAR Report input is simple - all a hotel has to do is report room nights sold and a revenue number each month to get a basic comparison of its own performance. There is
a third number for "room nights available," but that is a relatively
static number unless rooms are being renovated or otherwise not
available. Hotels can upgrade to purchase more detailed benchmarking and other analytic data, such as picking a competitive
set of local properties or measuring against similar "class" properties.
Calculating ADR and RevPAR is simple for hotels. A 100-room hotel has 36,500 available room nights per year. If it sold 29,200 room nights and its revenue is $2,000,000, the ADR is $68.49 and RevPAR is $54.79. The STAR Report then shows a comparison with the hotel's own history and benchmarks it against its selected peer group. The market and competitive benchmarks are available for about $700 per year. There are further
refinements and features that can cost more, but the basic STAR
Report is very comprehensive. The STAR Report has become
so well accepted that many hotel flagship brands require their
operators to submit the data and share the resulting reports on each
hotel operation. The STAR Report is also the "holy grail" for
commercial real estate lenders and appraisers.
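The ADR and RevPAR arithmetic above can be sketched in a few lines. The figures are the article's illustrative 100-room hotel; the function names are ours, not STR's:

```python
def adr(revenue: float, room_nights_sold: int) -> float:
    """Average Daily Rate: revenue per room night actually sold."""
    return revenue / room_nights_sold

def revpar(revenue: float, room_nights_available: int) -> float:
    """Revenue per Available Room: revenue spread over every available room night."""
    return revenue / room_nights_available

# The article's illustrative hotel: 100 rooms, open all 365 days.
rooms = 100
room_nights_available = rooms * 365        # 36,500
room_nights_sold = 29_200
revenue = 2_000_000.0

print(f"ADR:    ${adr(revenue, room_nights_sold):,.2f}")          # ADR:    $68.49
print(f"RevPAR: ${revpar(revenue, room_nights_available):,.2f}")  # RevPAR: $54.79
```

Note that occupancy here is 29,200 / 36,500 = 80%, and RevPAR is simply ADR times occupancy - which is why RevPAR, not ADR alone, is the number lenders watch.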
Golf is not quite as simple. Not everybody sets up their Tee
Sheet the same way and weather conditions impact capacity as
well - not just temperature and rain, but different daylight hours
and season length.
Calculating the Weather Adjusted Capacity (WAC) involves
looking at a lot of hourly weather data, both for a current time
period and for a certain number of past periods to create historical comparisons. Creating a three-year window requires looking
at 8,760 hours in each year, or 26,280 for the three-year window.
You also need "rules" to determine which hours are "playable"
and apply those rules consistently 26,280 times. Then you need
to decide how the Tee Sheet is set up. The only real way to do
that is to make adjustments for starting time (varied by daylight
changes) and the time to stop scheduling starts, again varied by
daylight. Most courses set up their tee sheets at the beginning of
the year and don't adjust for daylight. By definition, this makes
most POS-generated utilization reports meaningless or worse.
Many courses run "spring specials" based on what looks like low
utilization - after adjustment for daylight and weather variables,
spring use is actually much higher.
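The rule engine described above can be sketched roughly as follows. The playability thresholds and the starts-per-hour figure are our illustrative assumptions, not Pellucid's actual WAC model, which would calibrate them per market:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class HourlyObs:
    when: datetime        # local time of the hourly observation
    temp_f: float         # temperature, degrees Fahrenheit
    precip_in: float      # precipitation that hour, inches
    is_daylight: bool     # sun above horizon (from a solar calculator)

def is_playable(obs: HourlyObs,
                min_temp_f: float = 45.0,
                max_precip_in: float = 0.05) -> bool:
    """Illustrative playability rule: daylight, warm enough, essentially dry.
    The same rule must be applied consistently to all 8,760 hours per year
    (26,280 for a three-year window)."""
    return (obs.is_daylight
            and obs.temp_f >= min_temp_f
            and obs.precip_in <= max_precip_in)

def weather_adjusted_capacity(hours: list[HourlyObs],
                              starts_per_hour: int = 8) -> int:
    """Count playable hours and convert them to tee-sheet capacity in starts.

    A fuller model would also clip each day to the tee sheet's first and
    last scheduled start, shifted through the season as daylight changes.
    """
    playable = sum(1 for h in hours if is_playable(h))
    return playable * starts_per_hour
```

Feeding three years of hourly weather through a function like this yields a denominator that moves with daylight and weather, which is exactly what "days open" cannot do.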
The complex set of calculations above is where golf benchmarking usually hits a roadblock. PerformanceTrak tried "days
open," which sort of works except that a "playable day" in Miami has 8 hours and 40 minutes of playable daylight per annual
season day and Minneapolis has over 11 hours. I could go on,
but "days open" isn't very accurate - especially in aggregated data
that golf courses try to bring down to a local level. So first, we
have to find a better solution for calculating individual Weather
Adjusted Capacity for every participating golf course.
Second, a golf course needs to be able to choose the courses it wants to be compared to. Is that done by price, slope rating, simple proximity, or some other criterion? Most golf courses