How the Masters uses watsonx to manage its AI lifecycle

At the Masters®, storied tradition meets state-of-the-art technology. Through a partnership spanning more than 25 years, IBM has helped the Augusta National Golf Club capture, analyze, distribute and use data to bring fans closer to the action, culminating in the AI-powered Masters digital experience and mobile app. Now, whether they’re lining the fairways or watching from home, fans can more fully appreciate the performance of the world’s best golfers at the sport’s most prestigious tournament.

In a continuous design thinking process, teams from IBM Consulting and the club collaborate to improve the fan experience year after year.

New features in 2024 include Hole Insights, stats and projections about every shot, from every player on every hole; and expanded AI-generated narration (including Spanish language) on more than 20,000 highlight clips.

The Masters has long relied on IBM to manage its data, applications and workloads across on-premises servers and multiple clouds, but this year marks an important evolution: the entire AI lifecycle is being managed on the AI and data platform IBM® watsonx™.

Collecting data

The IBM watsonx platform includes watsonx.data, a fit-for-purpose data store built on an open lakehouse architecture. This allows the Masters to scale analytics and AI wherever their data resides, through open formats and integration with existing databases and tools.
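
As a rough sketch of what querying an open lakehouse looks like in practice, the snippet below reads historical shot data through a Trino/Presto-compatible client, which is how watsonx.data query engines are typically reached. The endpoint, credentials, catalog, schema and table names are placeholders rather than the Masters' actual configuration.

```python
# Sketch: querying shot data through a lakehouse query engine.
# Assumes the engine is reachable over the standard Trino/Presto protocol;
# host, credentials, catalog, schema and table names are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="lakehouse.example.com",      # hypothetical engine endpoint
    port=443,
    http_scheme="https",
    user="ibmlhapikey",                # placeholder user
    auth=trino.auth.BasicAuthentication("ibmlhapikey", "<api-key>"),
    catalog="iceberg_data",            # hypothetical catalog
    schema="masters",                  # hypothetical schema
)

cur = conn.cursor()
# Hypothetical table of historical shots stored in an open table format.
cur.execute(
    """
    SELECT hole, round, player_id, carry_yards, result
    FROM shots
    WHERE tournament_year >= 2016
    """
)
for hole, rnd, player_id, carry, result in cur.fetchmany(5):
    print(hole, rnd, player_id, carry, result)
```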

“The data lake at the Masters draws on eight years of data that reflects how the course has changed over time, while using only the shot data captured with our current ball-tracking technology,” says Aaron Baughman, IBM Fellow and AI and Hybrid Cloud Lead at IBM. “Hole distances and pin positions vary from round to round and year to year; these factors are important as we stage the data.”

The historical sources watsonx.data accesses comprise relational, object and document databases, including IBM® Db2®, IBM® Cloudant, IBM Cloud® Object Storage and PostgreSQL.

Lastly, watsonx.data pulls from live feeds. “We’ll hit a variety of feeds from the system, including scoring, ball tracking, pin location, player pairings and scheduling,” says Baughman. “We also pull in video, which is where we add the commentary and embed it into the clips.”
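
A minimal sketch of that kind of feed ingestion is shown below; the feed URLs and record fields are hypothetical, since only the feed types are named here.

```python
# Sketch: staging live tournament feeds alongside historical data.
# URLs and record fields are placeholders, not the real feed endpoints.
import time
import requests

FEEDS = {
    "scoring": "https://feeds.example.com/scoring",          # placeholder
    "ball_tracking": "https://feeds.example.com/tracking",   # placeholder
    "pin_location": "https://feeds.example.com/pins",        # placeholder
}

def poll_once(staged: list) -> None:
    """Pull each feed once and append timestamped records to the staging list."""
    for name, url in FEEDS.items():
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
        for record in resp.json():
            staged.append({"feed": name, "received_at": time.time(), **record})

staged_records: list = []
poll_once(staged_records)   # in production this would run on a schedule
```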

Watsonx.data lets organizations optimize workloads for different uses. For the Masters, “Consumer-facing data access is fronted by a CDN that caches resources so the traffic doesn’t hit our origin servers, whereas our AI workflow calls on data directly from the origin to ensure it’s as up to date as possible,” says Baughman.
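
The split Baughman describes can be pictured as two read paths, one through a caching CDN hostname and one straight to the origin. The hostnames and headers below are illustrative only.

```python
# Sketch of the two access paths: consumer traffic is served from the CDN
# cache, while the AI workflow bypasses the cache and reads from the origin.
import requests

CDN_BASE = "https://cdn.example.com"        # hypothetical CDN hostname
ORIGIN_BASE = "https://origin.example.com"  # hypothetical origin hostname

def fetch_for_fans(path: str) -> bytes:
    """Consumer-facing read: served from the CDN cache when possible."""
    resp = requests.get(f"{CDN_BASE}{path}", timeout=5)
    resp.raise_for_status()
    return resp.content

def fetch_for_ai(path: str) -> bytes:
    """AI-workflow read: go straight to the origin for the freshest data."""
    resp = requests.get(
        f"{ORIGIN_BASE}{path}",
        headers={"Cache-Control": "no-cache"},  # ask intermediaries not to serve stale copies
        timeout=5,
    )
    resp.raise_for_status()
    return resp.content
```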

Preparing and annotating data

IBM watsonx.data helps organizations put their data to work, curating and preparing data for use in AI models and applications. The Masters uses watsonx.data to organize and structure data relating to the tournament—course, round and holes—which can then be populated with live data as the tournament progresses. “We also have player elements, ball tracking information and scoring,” says Baughman. “Being able to organize the data around that structure helps us to efficiently query, retrieve and use the information downstream, for example for AI narration.”
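
A minimal sketch of that tournament structure, with a helper for the kind of downstream retrieval the team describes, might look like the following; the field names are illustrative, not the actual watsonx.data schema.

```python
# Sketch: course/round/hole structure that live shot data attaches to.
from dataclasses import dataclass, field

@dataclass
class Shot:
    player_id: str
    stroke: int
    carry_yards: float
    result: str  # e.g. "fairway", "green", "bunker"

@dataclass
class Hole:
    number: int
    par: int
    yardage: int
    pin_position: str  # e.g. "front-left"
    shots: list[Shot] = field(default_factory=list)

@dataclass
class Round:
    number: int
    holes: list[Hole] = field(default_factory=list)

@dataclass
class Tournament:
    year: int
    rounds: list[Round] = field(default_factory=list)

    def shots_on_hole(self, round_no: int, hole_no: int) -> list[Shot]:
        """Downstream query helper, e.g. to feed AI narration for one hole."""
        for rnd in self.rounds:
            if rnd.number == round_no:
                for hole in rnd.holes:
                    if hole.number == hole_no:
                        return hole.shots
        return []
```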

Watsonx.data uses machine learning (ML) applications to simulate data that represents ball positioning projections. “With the data we’ve prepared we can then calculate the odds of a birdie or an eagle from a particular sector; we can also look across to the opposite side of the fairway for contrastive statistics,” says Baughman.
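
The sector-based odds calculation can be sketched with a few lines of Python over simulated or historical shot outcomes; the numbers below are invented purely to show the arithmetic.

```python
# Sketch: estimate birdie-or-better odds by landing sector, plus a
# contrastive comparison between the two sides of the fairway.
from collections import defaultdict

# Each record: (landing_sector, score relative to par on the hole). Made-up data.
shots = [
    ("left_fairway", -1), ("left_fairway", 0), ("left_fairway", -1),
    ("right_fairway", 0), ("right_fairway", 1), ("right_fairway", -1),
]

by_sector = defaultdict(list)
for sector, score in shots:
    by_sector[sector].append(score)

def birdie_rate(scores: list[int]) -> float:
    """Share of shots from this sector that led to birdie or better."""
    return sum(1 for s in scores if s <= -1) / len(scores)

for sector, scores in by_sector.items():
    print(f"{sector}: P(birdie or better) = {birdie_rate(scores):.2f}")

# Contrastive statistic: compare the two sides of the fairway.
delta = birdie_rate(by_sector["left_fairway"]) - birdie_rate(by_sector["right_fairway"])
print(f"left vs right fairway birdie-rate difference: {delta:+.2f}")
```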

Developing and evaluating AI models

The IBM® watsonx.ai component of watsonx lets enterprise users build AI applications faster and with less data, whether they’re using generative AI or traditional ML.

“For the Masters we use 290 traditional AI models to project where golf balls will land,” says Baughman. “When a ball passes one of the predefined distance thresholds for a hole, it shifts to the next model, eventually ending up on the green. In addition, there are four possible pin locations—front left, front right, back left or back right—for a total of about 16 models per hole. It would be a huge challenge for a human to manage these models, so we use the AutoAI feature of watsonx to help us build the right model and pick the best projection.”
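
The model-selection logic in that description can be sketched as a simple router keyed on distance thresholds and pin position; the thresholds and stand-in models below are illustrative, and the real models are built and chosen with AutoAI inside watsonx.

```python
# Sketch: route a shot to the projection model for its distance band and pin.
from typing import Callable, Dict, List, Tuple

PINS = ("front_left", "front_right", "back_left", "back_right")
# Each entry: (minimum yards remaining for this band, model keyed by pin position)
ModelTable = List[Tuple[float, Dict[str, Callable[[dict], dict]]]]

def pick_model(models: ModelTable, yards_remaining: float, pin: str):
    """Return the first model whose distance band covers the current lie."""
    for threshold, by_pin in models:
        if yards_remaining >= threshold:
            return by_pin[pin]
    return models[-1][1][pin]  # fallback: treat as the shortest band

def make_stub(name: str):
    """Hypothetical stand-in for a trained projection model."""
    return lambda features: {"model": name, "projected_sector": "green_front"}

models: ModelTable = [
    (200.0, {p: make_stub(f"long_{p}") for p in PINS}),
    (100.0, {p: make_stub(f"mid_{p}") for p in PINS}),
    (0.0,   {p: make_stub(f"short_{p}") for p in PINS}),
]

model = pick_model(models, yards_remaining=145.0, pin="back_left")
print(model({"lie": "fairway", "wind_mph": 6}))
```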

Watsonx.ai also helped the digital team build a generative AI model for text creation, as the basis for spoken commentary. The team then uses watsonx.governance to evaluate the quality of the output with metrics such as ROUGE, METEOR and perplexity scores, while guardrails screen out any hate, abuse or profanity (HAP) content.
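
Watsonx.governance runs these evaluations inside the platform; as an offline illustration of what ROUGE and METEOR measure, the sketch below scores a made-up pair of reference and generated commentary lines using the open-source rouge-score and NLTK packages.

```python
# Offline sketch of the quality metrics mentioned above.
# Requires: pip install rouge-score nltk
from rouge_score import rouge_scorer
from nltk.translate.meteor_score import meteor_score
import nltk

nltk.download("wordnet", quiet=True)  # METEOR needs WordNet data

reference = "The approach lands twelve feet from the pin, setting up a birdie chance."
generated = "The approach finishes twelve feet from the hole, leaving a birdie putt."

rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = rouge.score(reference, generated)
print("ROUGE-1 F1:", round(scores["rouge1"].fmeasure, 3))
print("ROUGE-L F1:", round(scores["rougeL"].fmeasure, 3))

meteor = meteor_score([reference.split()], generated.split())
print("METEOR:", round(meteor, 3))
```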

“The tools in watsonx.governance really help,” says Baughman. “We can keep track of the model version we use, promote it to validation, and eventually deploy it to production once we feel confident that all the metrics are passing our quality estimates. We also measure response time since this is a near real-time system. Watsonx.governance makes it easy to manage and deploy all these models effectively.”

Training and testing models

The Masters digital team used watsonx.ai to automate the creation of the ML models used in Hole Insights, based on eight years of data. For AI narration, they used a pretrained large language model (LLM) with billions of parameters.

“We used few-shot learning to help guide the models,” says Baughman. “Rather than fine-tuning the models through the tournament, we modify the input statistics that go into the models. It’s a compromise that delivers the results we need while minimizing risk.”
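
A minimal sketch of that approach is a fixed few-shot prompt template into which fresh statistics are substituted for each clip; the example shots and wording below are illustrative.

```python
# Sketch: keep the few-shot examples fixed and swap in live statistics,
# rather than fine-tuning the LLM during the tournament.
FEW_SHOT_EXAMPLES = """\
Stats: hole 12, par 3, tee shot to 8 feet, pin back-left.
Commentary: A beautifully judged iron at the twelfth settles just eight feet from the back-left pin.

Stats: hole 2, par 5, second shot from 230 yards finds the front bunker.
Commentary: Going for the green in two at the second, the ball runs out into the front bunker.
"""

def build_prompt(live_stats: dict) -> str:
    """Combine fixed few-shot examples with the latest shot statistics."""
    stats_line = (
        f"hole {live_stats['hole']}, par {live_stats['par']}, "
        f"{live_stats['shot_desc']}, pin {live_stats['pin']}."
    )
    return f"{FEW_SHOT_EXAMPLES}\nStats: {stats_line}\nCommentary:"

prompt = build_prompt(
    {"hole": 16, "par": 3, "shot_desc": "tee shot to 4 feet", "pin": "back-right"}
)
print(prompt)  # this prompt would then be sent to the deployed LLM
```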

Watsonx.governance also provides multiple LLMs used to validate the data of the main model, for example to eliminate HAP content. “We have a lot of guardrails, right down to regular expressions,” says Baughman. “Watsonx gave us confidence that we could identify and mitigate HAP content in real time, before it gets published.”
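
A regular-expression guardrail of the kind Baughman mentions can be as simple as a blocklist check before publishing; the patterns below are placeholders, and the platform's HAP detection layers model-based filters on top of rules like this.

```python
# Sketch: a last-line regex guardrail run on narration before it is published.
import re

# Placeholder patterns only; a production blocklist would be far larger.
BLOCKLIST_PATTERNS = [r"\bdamn\b", r"\bidiot\b"]
BLOCKLIST = [re.compile(p, re.IGNORECASE) for p in BLOCKLIST_PATTERNS]

def passes_regex_guardrail(text: str) -> bool:
    """Return True only if no blocklisted pattern appears in the narration."""
    return not any(pattern.search(text) for pattern in BLOCKLIST)

narration = "A superb recovery from the pine straw sets up an unlikely birdie."
print("publish" if passes_regex_guardrail(narration) else "hold for review")
```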

Deploying and managing models

After tuning and testing ML or generative AI models, watsonx.ai provides a variety of ways to deploy them to production and to evaluate them within the deployment space. Models can also be evaluated for fairness, quality and drift.

“We used Python scripts in watsonx to deploy the ML models on Watson Machine Learning [a set of Machine Learning REST APIs running on IBM Cloud],” says Baughman. “We also run the models locally, since we have containers that load the models in memory, so there’s no network latency at all. We have both strategies—we typically run the ones in memory first, then if anything goes wrong, we use the models deployed on Watson Machine Learning.”
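
That two-tier strategy can be sketched as a scoring function that tries the in-memory model first and falls back to a hosted Watson Machine Learning endpoint; the scoring URL, token and payload shape below are placeholders rather than the team's real deployment.

```python
# Sketch: score with the in-memory model first, fall back to the REST deployment.
import requests

def score_in_memory(model, features: dict) -> dict:
    """Primary path: the model object is already loaded in the container."""
    return {"projection": model.predict([list(features.values())])[0]}

def score_remote(features: dict) -> dict:
    """Fallback path: call the hosted deployment over REST."""
    scoring_url = "https://<wml-endpoint>/predictions"   # placeholder URL
    payload = {"input_data": [{"fields": list(features), "values": [list(features.values())]}]}
    resp = requests.post(
        scoring_url,
        json=payload,
        headers={"Authorization": "Bearer <token>"},      # placeholder token
        timeout=3,
    )
    resp.raise_for_status()
    return resp.json()

def score(model, features: dict) -> dict:
    try:
        return score_in_memory(model, features)           # no network latency
    except Exception:
        return score_remote(features)                     # hosted fallback
```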

The team took a different approach to deploying the LLM used for AI narration: first, a deployed model within watsonx.ai, where its generative parameters can be managed; and second, a model deployed to Watson Machine Learning through watsonx.governance.

Governing and maintaining models

Watsonx.governance provides automated monitoring of deployed ML and generative AI models and facilitates transparent, explainable results. Users can establish risk tolerances and set alerts around a wide variety of metrics.

“Watsonx.governance alerts us if the models fail on any dimension, and allows us to easily fix them,” says Baughman. “We can also run experiments on demand, create AI use cases and ensure they work as expected.” One such experiment: after a round ends, the teams have ground truth for that round that can be added into the model and revalidated, enabling continual improvement.
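
One way to picture that post-round check: compare the round's projections against the newly available ground truth and flag the model for retuning if accuracy drops below a tolerance. The records and threshold below are illustrative.

```python
# Sketch: post-round revalidation against ground truth.
def revalidate_after_round(projections: list[dict], ground_truth: dict, tolerance: float = 0.8) -> bool:
    """Return True if the round's accuracy still meets the tolerance."""
    correct = sum(
        1 for p in projections
        if ground_truth.get(p["shot_id"]) == p["projected_sector"]
    )
    accuracy = correct / len(projections)
    print(f"round accuracy: {accuracy:.2f}")
    return accuracy >= tolerance

projections = [
    {"shot_id": "r1-h12-s1", "projected_sector": "green_front"},
    {"shot_id": "r1-h12-s2", "projected_sector": "green_back"},
]
ground_truth = {"r1-h12-s1": "green_front", "r1-h12-s2": "green_front"}

if not revalidate_after_round(projections, ground_truth):
    print("retune with the new round's data, then revalidate before redeploying")
```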

The 88th Masters Tournament will be played from April 11 to 14 at Augusta National Golf Club in Augusta, GA. To see IBM technology in action, visit Masters.com or the Masters app on your mobile device, available on the Apple App Store and Google Play Store.

Discover how watsonx can help you manage the entire AI lifecycle
