Accelerating Scope 3 emissions accounting: LLMs to the rescue

The rising interest in the calculation and disclosure of Scope 3 GHG emissions has thrown the spotlight on emissions calculation methods. One of the more common Scope 3 calculation methodologies that organizations use is the spend-based method, which can be time-consuming and resource-intensive to implement. This article explores an innovative way to streamline the estimation of Scope 3 GHG emissions by leveraging AI and Large Language Models (LLMs) to help categorize financial transaction data to align with spend-based emission factors.

Why are Scope 3 emissions difficult to calculate?


Scope 3 emissions, also called indirect emissions, encompass greenhouse gas (GHG) emissions that occur in an organization’s value chain and, as such, are not under its direct operational control or ownership. In simpler terms, these emissions arise from external sources, such as emissions associated with suppliers and customers, and are beyond the company’s core operations.

A 2022 CDP study found that, for companies that report to CDP, emissions occurring in their supply chains are on average 11.4 times greater than their operational emissions.

The same study showed that 72% of CDP-responding companies reported only their operational emissions (Scope 1 and/or 2). Some companies attempt to estimate Scope 3 emissions by collecting data from suppliers and manually categorizing it, but progress is hindered by challenges such as large supplier bases, the depth of supply chains, complex data collection processes and substantial resource requirements.

Using LLMs for Scope 3 emissions estimation to speed time to insight

One approach to estimating Scope 3 emissions is to leverage financial transaction data (for example, spend) as a proxy for emissions associated with the goods and/or services purchased. Converting this financial data into a GHG emissions inventory requires information on the GHG emissions impact of the product or service purchased.

The US Environmentally-Extended Input-Output (USEEIO) model is a lifecycle assessment (LCA) framework that traces the economic and environmental flows of goods and services within the United States. USEEIO offers a comprehensive dataset and methodology that merges economic input-output (IO) analysis with environmental data to estimate the environmental impacts associated with economic activities. Within USEEIO, goods and services are categorized into 66 spend categories, referred to as commodity classes, based on their common environmental characteristics. Each commodity class is associated with an emission factor that is used to estimate environmental impacts from expenditure data.
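To make the mechanics concrete, here is a minimal sketch of a spend-based emission estimate: multiply each dollar of spend by the emission factor of its commodity class. The commodity classes and factor values below are illustrative placeholders, not actual USEEIO factors; in practice they come from the published USEEIO dataset.

```python
# Minimal sketch of a spend-based Scope 3 estimate.
# NOTE: the emission factors below are illustrative placeholders,
# not actual USEEIO values.

# Emission factors in kg CO2e per US dollar spent, keyed by a
# USEEIO-style commodity class (hypothetical values).
EMISSION_FACTORS = {
    "Paper products": 0.45,
    "Air transportation": 1.20,
    "Computer and electronic products": 0.30,
}

def estimate_emissions(commodity_class: str, spend_usd: float) -> float:
    """Return estimated kg CO2e for a ledger entry: spend x factor."""
    factor = EMISSION_FACTORS[commodity_class]
    return spend_usd * factor

# A $10,000 purchase of paper products maps to 10,000 * 0.45 = 4,500 kg CO2e.
print(estimate_emissions("Paper products", 10_000))
```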

The Eora MRIO (Multi-region input-output) dataset is a globally recognized spend-based emission factor set that documents the inter-sectoral transfers among 15,909 sectors across 190 countries. The Eora factor set has been modified to align with the USEEIO categorization of 66 summary classifications per country. This involves mapping the 15,909 sectors found across the Eora26 categories, and the more detailed national sector classifications, to the 66 USEEIO spend categories.
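Conceptually, this sector-to-category alignment behaves like a lookup table keyed by country and sector. The sketch below is a toy illustration with invented sector names; the real mapping spans all 15,909 Eora sectors.

```python
# Toy illustration of aligning Eora sectors to USEEIO summary
# categories. Sector and category names are invented examples; the
# real mapping covers 15,909 sectors across 190 countries.
EORA_TO_USEEIO = {
    ("USA", "Pulp and paper"): "Paper products",
    ("DEU", "Papierherstellung"): "Paper products",
    ("USA", "Scheduled air transport"): "Air transportation",
}

def useeio_category(country: str, eora_sector: str) -> str:
    """Map a country-specific Eora sector to its USEEIO spend category."""
    return EORA_TO_USEEIO[(country, eora_sector)]

print(useeio_category("DEU", "Papierherstellung"))  # -> "Paper products"
```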

However, while spend-based, commodity-class-level data presents an opportunity to help address the difficulties associated with Scope 3 emissions accounting, manually mapping high volumes of financial ledger entries to commodity classes is an exceptionally time-consuming, error-prone process.

This is where LLMs come into play. In recent years, remarkable strides have been made in developing large foundation language models for natural language processing (NLP). These models have demonstrated strong performance compared to conventional machine learning (ML) models, particularly in scenarios where labelled data is in short supply. Capitalizing on the capabilities of these large pre-trained NLP models, combined with domain adaptation techniques that make efficient use of limited data, holds significant potential for tackling the challenges associated with accounting for Scope 3 environmental impact.

Our approach involves fine-tuning foundation models to recognize the Environmentally-Extended Input-Output (EEIO) commodity classes of purchase orders or ledger entries written in natural language. We then calculate the emissions associated with the spend using EEIO emission factors (emissions per dollar spent) sourced from Supply Chain GHG Emission Factors for US Commodities and Industries for US-centric datasets, and from the Eora MRIO (Multi-region input-output) dataset for global datasets. This framework helps streamline and simplify the process for businesses to calculate Scope 3 emissions.
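As a sketch of what the fine-tuning step can look like, the snippet below uses the Hugging Face transformers Trainer API to adapt roberta-base into a 66-way commodity-class classifier. The data file name, column names and hyperparameters are assumptions for illustration; the article does not publish its exact training setup.

```python
# Sketch: fine-tune roberta-base to classify ledger-entry text into
# 66 USEEIO commodity classes. Data file and settings are
# hypothetical, not the article's actual training configuration.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=66)  # one label per commodity class

# Hypothetical CSV with columns "text" (ledger entry) and "label" (0-65).
dataset = load_dataset("csv", data_files="ledger_entries.csv")["train"]
dataset = dataset.train_test_split(test_size=0.2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="scope3-classifier",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```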

Figure 1 illustrates the framework for Scope 3 emission estimation employing a large language model. The framework comprises four distinct modules: data preparation, domain adaptation, classification and emission computation.


Figure 1: Framework for estimating Scope 3 emissions using large language models
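Put together, the classification and emission computation modules amount to a short inference loop: classify the free-text ledger entry, look up the factor for the predicted class, and multiply by the spend. The sketch below assumes a fine-tuned model saved at a placeholder path and reuses illustrative factors; it is not the product implementation.

```python
# Sketch of the classification + emission computation modules.
# "scope3-classifier" is a placeholder path to a fine-tuned model
# whose config id2label is assumed to map ids to class names.
from transformers import pipeline

# Illustrative factors (kg CO2e per USD), as in the earlier sketch.
EMISSION_FACTORS = {"Paper products": 0.45, "Air transportation": 1.20}

classifier = pipeline("text-classification", model="scope3-classifier")

def emissions_for_entry(description: str, spend_usd: float) -> float:
    """Classify a ledger entry, then apply the spend-based factor."""
    commodity_class = classifier(description)[0]["label"]
    return spend_usd * EMISSION_FACTORS[commodity_class]

print(emissions_for_entry("Quarterly invoice - office copier paper", 2_500.0))
```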

We conducted extensive experiments involving several cutting-edge LLMs, including roberta-base, bert-base-uncased and distilroberta-base-climate-f. Additionally, we explored classical non-foundation models based on TF-IDF and Word2Vec vectorization approaches. Our objective was to assess the potential of foundation models (FMs) for estimating Scope 3 emissions using financial transaction records as a proxy for goods and services. The experimental results indicate that fine-tuned LLMs deliver significant improvements over the zero-shot classification approach. Furthermore, they outperformed classical text-mining techniques such as TF-IDF and Word2Vec, delivering performance on par with domain-expert classification.


Figure 2: Comparison of results across the different approaches
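For reference, a classical TF-IDF baseline of the kind compared above can be expressed in a few lines of scikit-learn: vectorize the ledger-entry text, then fit a linear classifier. The training examples below are invented placeholders, not our evaluation data.

```python
# Sketch of a classical TF-IDF baseline for comparison. The tiny
# training set here is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["office copier paper", "flight NYC to SFO", "laptop purchase"]
labels = ["Paper products", "Air transportation",
          "Computer and electronic products"]

baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(texts, labels)
print(baseline.predict(["round-trip airfare, client visit"]))
```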

Incorporating AI into IBM Envizi ESG Suite to calculate Scope 3 emissions

Employing LLMs in the process of estimating Scope 3 emissions is a promising new approach.

We embraced this approach and embedded it into IBM® Envizi™ ESG Suite as an AI-driven feature that uses an NLP engine to help identify the commodity category from spend transaction descriptions.

As previously explained, spend data is more readily available in an organization and is a common proxy for the quantity of goods and services purchased. However, challenges such as commodity recognition and mapping can seem hard to address. Why?

  • Firstly, because purchased products and services are described in natural language in many different forms, commodity recognition from purchase orders and ledger entries is extremely hard.
  • Secondly, because there are millions of products and services for which a spend-based emission factor may not be available, manually mapping each commodity or service to a product or service category is extremely hard, if not impossible.

Here’s where deep learning-based foundation models for NLP can be effective across a broad range of NLP classification tasks when labelled data is insufficient or limited. Leveraging large pre-trained NLP models with domain adaptation on limited data has the potential to support Scope 3 emissions calculation.
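One way to see why pre-trained models help here is the zero-shot setting, where a model scores free-form transaction text against candidate commodity classes with no task-specific training at all. The snippet below is a minimal illustration using a publicly available zero-shot model; the model choice and class names are examples, not the product's configuration.

```python
# Minimal zero-shot illustration: score a ledger entry against
# candidate commodity classes without any task-specific training.
# Model and class names are examples only.
from transformers import pipeline

zero_shot = pipeline("zero-shot-classification",
                     model="facebook/bart-large-mnli")

result = zero_shot(
    "PO-1187: catering for quarterly all-hands meeting",
    candidate_labels=["Food services", "Air transportation",
                      "Paper products"],
)
print(result["labels"][0])  # highest-scoring commodity class
```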

Wrapping up

In conclusion, calculating Scope 3 emissions with the support of LLMs represents a significant advancement in data management for sustainability. The promising outcomes from employing advanced LLMs highlight their potential to accelerate GHG footprint assessments. Practical integration into software like IBM Envizi ESG Suite can simplify the process while increasing the speed to insight.

See AI Assist in action within IBM Envizi ESG Suite
