Where to begin: 3 IBM leaders offer guidance to newly appointed chief AI officers
The number of chief artificial intelligence officers (CAIOs) has almost tripled in the last five years, according to LinkedIn. Companies across industries are realizing the need to integrate artificial intelligence (AI) into their core strategies from the top to avoid falling behind. These AI leaders are responsible for developing a blueprint for AI adoption and oversight both in companies and the federal government.
Following a recent executive order by the Biden administration and a meteoric rise in AI adoption across sectors, the Office of Management and Budget (OMB) released a memo on how federal agencies can seize AI’s opportunities while managing its risks. Many federal agencies are appointing CAIOs to oversee AI use within their domains, promote responsible AI innovation and address risks associated with AI, including generative AI (gen AI), by considering its impact on citizens.
But how will these CAIOs balance regulatory measures and innovation? How will they cultivate trust? Three IBM leaders offer their insights on the significant opportunities and challenges facing new CAIOs in their first 90 days:
1. “Consider safety, inclusivity, trustworthiness and governance from the beginning.” —Kush Varshney, IBM Fellow
The first 90 days as chief AI officer will be intense and will speed by, but you should nevertheless slow down and not take shortcuts.
Consider safety, inclusivity, trustworthiness, and governance from the beginning rather than as considerations to be tacked on at the end. But do not allow the caution and critical perspective of your inner social change agent to extinguish the optimism of your inner technologist.
Remember that just because AI is here now, your agency is not absolved of its existing responsibilities to the people. Consider the most vulnerable among us when specifying the problem, understanding the data and evaluating the solution.
Don’t be afraid to reframe fairness from simply divvying up limited resources in some equitable fashion to figuring out how you can care for the neediest. Don’t be afraid to reframe accountability from simply conforming to regulations to stewarding the technology. Don’t be afraid to reframe transparency from simply documenting the choices made after the fact to seeking public input beforehand.
Just like urban planning, AI is infrastructure. Choices made now can affect generations into the future. Be guided by the seventh generation principle, but do not succumb to long-term existential risk arguments at the expense of clear and present harms.
Keep an eye on harms we’ve encountered over several years through traditional machine learning modeling, and also on new and amplified harms we’re seeing through pre-trained foundation models. Choose smaller models whose cost and behavior may be governed. Pilot and innovate with a portfolio of projects; reuse and harden solutions to common patterns that emerge; and only then deliver at scale through a multi-model platform approach.
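As a loose illustration of that pilot-then-scale point, the sketch below routes each task to the smallest model in a hypothetical portfolio that has been approved for it, falling back to larger models only when no smaller one qualifies. The model names, sizes, approval sets and the route_request helper are assumptions invented for this example, not a prescribed IBM or federal design.

```python
# Minimal sketch (hypothetical): prefer the smallest approved model in a
# portfolio, since smaller models are cheaper to run and easier to govern.
from dataclasses import dataclass

@dataclass
class ModelEntry:
    name: str               # illustrative model identifier (not a real product)
    size_b_params: float    # size in billions of parameters, a rough cost proxy
    approved_tasks: set     # tasks this model has been reviewed and approved for

# A hypothetical portfolio, ordered so cheaper models are considered first.
PORTFOLIO = sorted(
    [
        ModelEntry("small-summarizer", 3, {"summarization"}),
        ModelEntry("mid-generalist", 13, {"summarization", "classification"}),
        ModelEntry("large-generalist", 70, {"summarization", "classification", "drafting"}),
    ],
    key=lambda m: m.size_b_params,
)

def route_request(task: str) -> ModelEntry:
    """Return the smallest model approved for this task, or raise if none is."""
    for model in PORTFOLIO:
        if task in model.approved_tasks:
            return model
    raise ValueError(f"No approved model for task: {task}")

print(route_request("classification").name)  # -> mid-generalist
```

Hardened routing rules like this one are the kind of common pattern that can later be reused across projects and delivered at scale through a multi-model platform.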
2. “Create trustworthy AI development.” —Christina Montgomery, IBM Vice President and Chief Privacy and Trust Officer
To drive efficiency and innovation and to build trust, all CAIOs should begin by implementing an AI governance program to help address the ethical, social and technical issues central to trustworthy AI development and deployment.
In the first 90 days, start by conducting an organizational maturity assessment of your agency’s baseline. Review frameworks and assessment tools so you have a clear indication of any strengths and weaknesses that will impact your ability to implement AI tools and manage the associated risks. This process can help you identify a problem or opportunity that an AI solution can address.
Beyond technical requirements, you will also need to document and articulate agency-wide ethics and values regarding the creation and use of AI, which will inform your decisions about risk. These guidelines should address issues such as data privacy, bias, transparency, accountability and safety. IBM has developed trust and transparency principles and an “Ethics by Design” playbook that can help you and your team operationalize those principles.
As a part of this process, establish accountability and oversight mechanisms to ensure that AI systems are used responsibly and ethically. This includes establishing clear lines of accountability and oversight, as well as monitoring and auditing processes to ensure compliance with ethical guidelines.
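One loose way to picture those monitoring and auditing processes is a minimal audit trail kept for every AI-assisted decision, so compliance reviews have something concrete to query. The AuditRecord fields, the log_decision helper and the example values below are illustrative assumptions, not a schema required by OMB or recommended by IBM.

```python
# Minimal sketch (hypothetical schema): append one audit record each time an AI
# system contributes to an agency decision, to support later oversight reviews.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    system_name: str        # which AI system produced the output
    use_case: str           # the approved use case it was applied to
    decision_summary: str   # short description of the AI-assisted outcome
    human_reviewer: str     # accountable official who reviewed the output
    timestamp: str          # when the decision was made (UTC, ISO 8601)

def log_decision(record: AuditRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line so auditors can query it later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AuditRecord(
    system_name="claims-triage-model",
    use_case="benefits claim prioritization",
    decision_summary="flagged claim for expedited human review",
    human_reviewer="j.doe@agency.example",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

Even a simple append-only log like this gives the accountability and oversight mechanisms above a concrete artifact to audit against.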
Next, you should begin to adapt your agency’s existing governance structures to support AI. Quality AI requires quality data. Many of your existing programs and practices — such as third-party risk management, procurement, enterprise architecture, legal, privacy and information security — will already overlap with AI governance; build on them to create efficiency and leverage the full power of your agency teams.
The December 1, 2024 deadline to apply the minimum risk management practices to safety-impacting and rights-impacting AI, or else stop using that AI until compliance is achieved, will come around quicker than you think.
In your first 90 days on the job, take advantage of automated tools to streamline the process and turn to trusted partners, like IBM, to help implement the strategies you’ll need to create responsible AI solutions.
3. “Establish an enterprise-wide approach.” —Terry Halvorsen, IBM Vice President, Federal Client Development
For over a decade, IBM has been working with U.S. federal agencies to help them develop AI. The technology has enabled important advancements for many federal agencies in operational efficiency, productivity and decision-making.
For example, AI has helped the Internal Revenue Service (IRS) speed up the processing of paper tax returns (and the delivery of tax refunds to citizens), the Department of Veterans Affairs (VA) decrease the time it takes to process veterans’ claims, and the Navy’s Fleet Forces Command better plan and balance food supplies while also reducing related supply chain risks.
IBM has also long acknowledged the potential risks of AI adoption, and advocated for strong governance and for AI that is transparent, explainable, robust, fair and secure.
To help mitigate risks, simplify implementation and take advantage of opportunities, all newly appointed CAIOs should establish an enterprise-wide approach to data and a governance framework for AI adoption. Data accessibility, data volume and data complexity are all areas that must be understood and addressed.
‘Enterprise-wide’ suggests that the development and deployment of AI and data governance be brought out of traditional agency organizational silos. Involve stakeholders from across your agency, as well as any industry partners.
Measure your results and learn as you go – both from your agency’s efforts and those of your peers across government. And finally, the old adage ‘begin with the end in mind’ is as true today as ever.
IBM recommends that CAIOs encourage a use-case-driven approach to AI – which means identifying the targeted outcomes and experiences you hope to create, and working backward from there to the specific AI technologies you’ll use (generative AI, traditional AI, etc.).
CAIOs leading by example
Public leadership can set the tone for AI adoption across all sectors. The creation of the CAIO position plays a critical role in the future of AI, allowing our government to model a responsible approach to AI adoption across business, government and industry.
IBM has developed tools and strategies to help agencies adopt AI efficiently and responsibly in various environments. We’re ready to support these new CAIOs as they begin to build ethical and responsible AI implementations within their agencies.
Are you wondering what to prioritize in your AI journey?
Request an AI strategy briefing with IBM