AI governance is rapidly evolving — here’s how government agencies must prepare
The global AI governance landscape is complex and rapidly evolving. Key themes and concerns are emerging; however, government agencies should get ahead of the game by evaluating their agency-specific priorities and processes. Compliance with official policies through auditing tools and other measures is merely the final step. The groundwork for effectively operationalizing governance is human-centered, and includes securing funded mandates, identifying accountable leaders, developing agency-wide AI literacy and centers of excellence, and incorporating insights from academia, non-profits and private industry.
The global governance landscape
As of this writing, the OECD Policy Observatory lists 668 national AI governance initiatives from 69 countries, territories and the EU. These include national strategies, agendas and plans; AI coordination or monitoring bodies; public consultations of stakeholders or experts; and initiatives for the use of AI in the public sector. Moreover, the OECD places legally enforceable AI regulations and standards in a separate category from the initiatives mentioned earlier, in which it lists an additional 337 initiatives.
The term governance can be hard to define. In the context of AI, it can refer to the safety and ethics guardrails of AI tools and systems, policies concerning data access and model usage, or the government-mandated regulation itself. Therefore, we see national and international guidelines address these overlapping and intersecting definitions in a variety of ways. For all these reasons, AI governance should begin at the concept stage and continue throughout the lifecycle of the AI solution.
Common challenges, common themes
Broadly, government agencies strive for governance that supports and balances societal concerns of economic prosperity, national security and political dynamics, as we’ve seen in the recent White House order to establish AI governance boards in U.S. federal agencies.
Meanwhile, many private companies seem to prioritize economic prosperity, focusing on the efficiency and productivity that drive business success and shareholder value. Some companies, such as IBM, emphasize integrating guardrails into AI workflows.
Non-governmental bodies, academics and other experts are also publishing guidance useful to public sector agencies. This year the World Economic Forum’s AI Governance Alliance published the Presidio AI Framework (PDF). It “…provides a structured approach to the safe development, deployment and use of generative AI. In doing so, the framework highlights gaps and opportunities in addressing safety concerns, viewed from the perspective of four primary actors: AI model creators, AI model adapters, AI model users, and AI application users.”
Across industries and sectors, some common regulatory themes are emerging. For instance, it is increasingly advisable to provide transparency to end users about the presence and use of any AI they are interacting with. Leaders must ensure reliability of performance and resistance to attack, as well as actionable commitment to social responsibility. This includes prioritizing fairness and lack of bias in training data and output, minimizing environmental impact, and increasing accountability through designation of responsible individuals and organization-wide education.
Policies are not enough
Whether governance policies rely on soft law or formal enforcement, and no matter how comprehensively or eruditely they are written, they are only principles. How organizations put them into action is what counts.
For example, New York City published its own AI Action Plan in October 2023 and formalized its AI principles in March 2024. Though these principles aligned with the themes above, including stating that AI tools “should be tested before deployment”, the AI-powered chatbot that the city rolled out to answer questions about starting and operating a business gave answers that encouraged users to break the law. Where did the implementation break down?
Operationalizing governance requires a human-centered, accountable, participatory approach. Let’s look at three key actions that agencies must take:
1. Designate accountable leaders and fund their mandates
Trust cannot exist without accountability. To operationalize governance frameworks, government agencies require accountable leaders who have funded mandates to do the work.
To cite just one knowledge gap: several senior technology leaders we’ve spoken to have no comprehension of how data can be biased. Data is an artifact of human experience, prone to calcifying worldviews and inequity. AI can be viewed as a mirror that reflects our biases back to us.
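To make that point concrete, here is a minimal sketch, on invented data, of how bias already present in historical records can be surfaced before those records become training labels: compare favorable-outcome rates across groups and compute a disparate impact ratio. The group labels, numbers and the four-fifths threshold referenced below are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch on invented data: surface bias in historical decisions by
# comparing favorable-outcome rates across groups (disparate impact ratio).
from collections import defaultdict

# Hypothetical historical decisions that might become training labels.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {group: approvals[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'A': 0.75, 'B': 0.25}
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33, far below the common 0.8 heuristic
```

A model trained on labels like these would simply learn and reproduce the historical disparity, which is exactly what accountable leaders need to understand and be resourced to check for.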
It is imperative that we identify accountable leaders who understand this and can be both financially empowered and held responsible for ensuring their AI is ethically operated and aligns with the values of the community it serves.
2. Provide applied governance training
We observe many agencies holding AI “innovation days” and hackathons aimed at improving operational efficiencies (such as reducing costs, better engaging citizens or employees, and improving other KPIs). We recommend that these hackathons be extended in scope to address the challenges of AI governance, through these steps:
- Step 1: Three months before the pilots are presented, have a candidate governance leader host a keynote on AI ethics to hackathon participants.
- Step 2: Have the government agency that is establishing the policy act as judge for the event. Provide criteria on how pilot projects will be judged that include AI governance artifacts (documentation outputs) such as factsheets, audit reports, layers-of-effect analysis (intended, unintended, primary and secondary impacts) and functional and non-functional requirements of the model in operation.
- Step 3: For six to eight weeks leading up to the presentation date, offer applied training to the teams on developing these artifacts through workshops on their specific use cases. Bolster development teams by inviting diverse, multidisciplinary teams to join them in these workshops as they assess ethics and model risk.
- Step 4: On the day of the event, have each team present their work in a holistic way, demonstrating how they have assessed and would mitigate the various risks associated with their use cases. Judges with domain, regulatory and cybersecurity expertise should question and evaluate each team’s work.
These timelines are based on our experience giving practitioners applied training with respect to very specific use cases. This approach gives would-be leaders a chance to do the actual work of governance, guided by a coach, while putting team members in the role of discerning governance judges.
But hackathons are not enough. One cannot learn everything in three months. Agencies should invest in building a culture of AI literacy education that fosters ongoing learning, including discarding old assumptions when necessary.
3. Evaluate inventory beyond algorithmic impact assessments
Organizations that develop many AI models often rely on algorithmic impact assessment forms as their primary mechanism to gather important metadata about their inventory and to assess and mitigate the risks of AI models before they are deployed. These forms only survey AI model owners or procurers about the purpose of the AI model, its training data and approach, accountable parties and concerns for disparate impact.
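As a rough illustration only, the metadata such a form collects can be pictured as a simple structured record. The field names and example values below are hypothetical assumptions, not a schema prescribed by any particular agency or regulation; they merely mirror the categories listed above.

```python
# Hypothetical sketch of the metadata an algorithmic impact assessment form
# typically collects (field names and values are illustrative only).
from dataclasses import dataclass

@dataclass
class AlgorithmicImpactAssessment:
    model_purpose: str                  # what the model is for
    training_data_and_approach: str     # data sources and modeling method
    accountable_parties: list[str]      # named owners, not just teams
    disparate_impact_concerns: str      # often a single self-reported free-text field

assessment = AlgorithmicImpactAssessment(
    model_purpose="Prioritize incoming citizen service requests",
    training_data_and_approach="Three years of historical request records; gradient-boosted classifier",
    accountable_parties=["Model owner", "Agency governance lead"],
    disparate_impact_concerns="Response times have historically differed by neighborhood",
)
```

A record like this captures useful metadata, but the disparate impact field is self-reported by a single owner, which is precisely the weakness discussed below.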
There are many causes for concern when these forms are used in isolation, without rigorous education, communication and cultural considerations. These include:
- Incentives: Are individuals incentivized or disincentivized to fill out these forms thoughtfully? We find that most are disincentivized because they have quotas to meet.
- Responsibility for risk: These forms can imply that model owners will be absolved of risk because they used a certain technology or cloud host or procured a model from a third party.
- Relevant definitions of AI: Model owners may not realize that what they are procuring or deploying meets the definition of AI or intelligent automation as described by a regulation.
- Ignorance about disparate impact: By putting the onus on a single person to complete and submit an algorithmic assessment form, one could argue that accurate assessment of disparate impact is omitted by design.
We have seen concerning form inputs made by AI practitioners across geographies and education levels, including by those who say that they have read the published policy and understand the principles.
Such entries include “How could my AI model be unfair if I am not gathering PII?” and “There are no risks for disparate impact as I have the best of intentions.”
These point to the urgent need for applied training, and an organizational culture that consistently measures model behaviors against clearly defined ethical guidelines.
Creating a culture of responsibility and collaboration
A participatory and inclusive culture is essential as organizations grapple with governing a technology with such far-reaching impact. As we have discussed previously, diversity is not a political factor but a mathematical one.
Multidisciplinary centers of excellence are essential to help ensure that employees are educated and responsible AI users who understand risks and disparate impact.
Organizations must make governance integral to collaborative innovation efforts, and stress that responsibility belongs to everyone, not just model owners. They must identify truly accountable leaders who bring a socio-technical perspective to issues of governance and who welcome new approaches to mitigating AI risk whatever the source—governmental, non-governmental or academic.
IBM Consulting can help organizations operationalize responsible AI governance
For more on this topic, read a summary of a recent IBM Center for The Business of Government roundtable with government leaders and stakeholders on how responsible use of artificial intelligence can benefit the public by improving agency service delivery.