Enterprises’ best bet for the future: Securing generative AI  

IBM
and
AWS
study:
Less
than
25%
of
current
generative
AI
projects
are
being
secured 

The
enterprise
world
has
long
operated
on
the
notion
that
trust
is
the
currency
of
good
business.
But
as
AI
transforms
and
redefines
how
businesses
operate
and
how
customers
interact
with
them,
trust
in
technology
must
be
built.  

Advances
in
AI
can
free
human
capital
to
focus
on
high-value
deliverables.
This
evolution
is
bound
to
have
a
transformative
impact
on
business
growth,
but
user
and
customer
experiences
hinge
on
organizations’
commitment
to
building
secured,
responsible,
and
trustworthy
technology
solutions.  

Businesses
must
determine
whether
the
generative
AI
interfacing
with
users
is
trusted,
and
security
is
a
fundamental
component
of
trust.
So,
herein
lies
the
one
of
the
biggest
bets
that
enterprises
are
up
against:
securing
their
AI
deployments. 

Innovate
now,
secure
later:
A
disconnect 

Today,
the
IBM®
Institute
for
Business
Value
released
the

Securing
generative
AI:
What
matters
now

study,
co-authored
by
IBM
and
AWS,
introducing
new
data,
practices,
and
recommendations
on
securing
generative
AI
deployments.
According
to
the
IBM
study,
82%
of
C-suite
respondents
stated
that
secure
and
trustworthy
AI
is
essential
to
the
success
of
their
businesses.
While
this
sounds
promising,
69%
of
leaders
surveyed
also
indicated
that
when
it
comes
to
generative
AI,
innovation
takes
precedence
over
security. 

Prioritizing
between
innovation
and
security
may
seem
like
a
choice,
but
in
fact,
it’s
a
test.
There’s
a
clear
tension
here;
organizations
recognize
that
the
stakes
are
higher
than
ever
with
generative
AI,
but
they
aren’t
applying
their

lessons
that
are
learned

from
previous
tech
disruptions.
Like
the
transition
to
hybrid
cloud,
agile
software
development,
or
zero
trust,
generative
AI
security
can
be
an
afterthought.
More
than
50%
of
respondents
are
concerned
about
unpredictable
risks
impacting
generative
AI
initiatives
and
fear
they
will
create
increased
potential
for
business
disruption.
Yet
they
report
only
24%
of
current
generative
AI
projects
are
being
secured.
Why
is
there
such
a
disconnect? 

Security
indecision
may
be
both
an
indicator
and
a
result
of
a
broader
generative
AI
knowledge
gap.
Nearly
half
of
respondents
(47%)
said
that
they
are
uncertain
about
where
and
how
much
to
invest
when
it
comes
to
generative
AI.
Even
as
teams
pilot
new
capabilities,
leaders
are
still
working
through
which
generative
AI
use
cases
make
the
most
sense
and
how
they
scale
them
for
their
production
environments. 

Securing
generative
AI
starts
with
governance 

Not
knowing
where
to
start
might
be
the
inhibitor
for
security
action
too.
Which
is
why
IBM
and
AWS
joined
efforts
to
illuminate
an
action
guide
and
practical
recommendations
for
organizations
seeking
to
protect
their
AI. 

To
establish
trust
and
security
in
their
generative
AI,
organizations
must
start
with
the
basics,
with
governance
as
a
baseline.
In
fact,
81%
of
respondents
indicated
that
generative
AI
requires
a
fundamentally
new
security
governance
model.
By
starting
with
governance,
risk,
and
compliance
(GRC),
leaders
can
build
the
foundation
for
a
cybersecurity
strategy
to
protect
their
AI
architecture
that
is
aligned
to
business
objectives
and
brand
values. 

For
any
process
to
be
secured,
you
must
first
understand
how
it
should
function
and
what
the
expected
process
should
look
like
so
that
deviations
can
be
identified.
AI
that
strays
from
what
it
was
operationally
designed
to
do
can
introduce
new
risks
with
unforeseen
business
impacts.
So,
identifying
and
understanding
those
potential
risks
helps
organizations
understand
their
own
risk
threshold,
informed
by
their
unique
compliance
and
regulatory
requirements. 

Once
governance
guardrails
are
set,
organizations
are
able
to
more
effectively
establish
a
strategy
for

securing
the
AI
pipeline.

The
data,
the
models,
and
their
use—as
well
as
the
underlying
infrastructure
they’re
building
and
embedding
their
AI
innovations
into.
While
the
shared
responsibility
model
for
security
may
change
depending
on
how
the
organization
uses
generative
AI.
Many
tools,
controls,
and
processes
are
available
to
help
mitigate
the
risk
of
business
impact
as
organizations
develop
their
own
AI
operations. 

Organizations
also
need
to
recognize
that
while
hallucinations,
ethics,
and
bias
often
come
to
mind
first
when
thinking
of
trusted
AI,
the
AI
pipeline
faces
a
threat
landscape
that
puts

trust
itself
at
risk
.
Conventional
threats
take
on
a
new
meaning,
new
threats
use
offensive
AI
capabilities
as
a
new
attack
vector,
and
new
threats
seek
to
compromise
the
AI
assets
and
services
we
increasingly
rely
upon. 

The
trust—security
equation 

Security
can
help
bring
trust
and
confidence
into
generative
AI
use
cases.
To
accomplish
this
synergy,
organizations
must
take
a

village
approach
.
The
conversation
must
go
beyond
IS
and
IT
stakeholders
to
strategy,
product
development,
risk,
supply
chain,
and
customer
engagement. 

Because
these
technologies
are
both
transformative
and
disruptive,
managing
the
organization’s
AI
and
generative
AI
estates
requires
collaboration
across
security,
technology,
and
business
domains. 

A
technology
partner
can
play
a
key
role.
Using
the
breadth
and
depth
of
technology
partners’
expertise
across
the
threat
lifecycle
and
across
the
security
ecosystem
can
be
an
invaluable
asset.
In
fact,
the
IBM
study
revealed
that
over
90%
of
surveyed
organizations
are
enabled
via
a
third-party
product
or
technology
partner
for
their
generative
AI
security
solutions.
When
it
comes
to
selecting
a
technology
partner
for
their
generative
AI
security
needs,
surveyed
organizations
reported
the
following: 

  • 76%
    seek
    a
    partner
    to
    help
    build
    a
    compelling
    cost
    case
    with
    solid
    ROI.  
  • 58%
    seek
    guidance
    on
    an
    overall
    strategy
    and
    roadmap. 
  • 76%
    seek
    partners
    that
    can
    facilitate
    training,
    knowledge
    sharing,
    and
    knowledge
    transfer. 
  • 75%
    choose
    partners
    that
    can
    guide
    them
    across
    the
    evolving
    legal
    and
    regulatory
    compliance
    landscape. 

The
study
makes
it
clear
that
organizations
recognize
the
importance
of
security
for
their
AI
innovations,
but
they
are
still
trying
to
understand
how
best
to
approach
the
AI
revolution.
Building
relationships
that
can
help
guide,
counsel
and
technically
support
these
efforts
is
a
crucial
next
step
in
protected
and
trusted
generative
AI.
In
addition
to
sharing
key
insights
on
executive
perceptions
and
priorities,
IBM
and
AWS
have
included
an
action
guide
with
practical
recommendations
for
taking
your
generative
AI
security
strategy
to
the
next
level. 

Learn
more
about
the
joint
IBM-AWS
study
and
how
organizations
can
protect
their
AI
pipeline

Was
this
article
helpful?


Yes
No

Comments are closed.