The Institute - September 2021 - 59

found in multiple data sets, finding patterns, and
proposing new courses of action.
"People are getting confused about the meaning
of AI in discussions of technology trends: that
there is some kind of intelligent thought in computers
that is responsible for the progress and which is
competing with humans," he says. "We don't have
that, but people are talking as if we do."
Jordan should know the difference, after all.
The IEEE Fellow is one of the world's leading
authorities on machine learning. Jordan helped
transform unsupervised machine learning,
which can find structure in data without
preexisting labels, from a collection of unrelated
algorithms to an intellectually coherent
field, the Engineering and Technology History
Wiki explains. Unsupervised learning plays an
important role in scientific applications where
there is an absence of established theory that
can provide labeled training data.
In 2003 he and his students developed latent
Dirichlet allocation, a probabilistic framework for
learning about the topical structure of documents
and other data collections in an unsupervised
manner, according to the wiki. The technique
lets the computer, not the user, discover patterns
and information on its own from documents.
The framework is one of the most popular topic
modeling methods used to discover hidden themes
and classify documents into categories.
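The core of LDA can be illustrated with a from-scratch sketch. The version below uses collapsed Gibbs sampling, a common way to fit the model (the 2003 paper itself used variational inference); the corpus, hyperparameters, and function name are invented for illustration. Each word token is repeatedly reassigned to a topic in proportion to how often that topic already appears in the word's document and how often the topic already generates that word.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Fit a toy LDA model with collapsed Gibbs sampling.

    docs: list of documents, each a list of word tokens.
    Returns per-document topic counts and per-topic word counts.
    """
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    doc_topic = [[0] * n_topics for _ in docs]                # n(d, k)
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # n(k, w)
    topic_total = [0] * n_topics                              # n(k)
    z = []  # current topic assignment of every token

    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            doc_topic[d][k] += 1
            topic_word[k][w] += 1
            topic_total[k] += 1
        z.append(zd)

    # Resample each token's topic from its collapsed conditional:
    # p(k) ∝ (n(d,k) + alpha) * (n(k,w) + beta) / (n(k) + beta * V).
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                doc_topic[d][k] -= 1
                topic_word[k][w] -= 1
                topic_total[k] -= 1
                weights = [
                    (doc_topic[d][j] + alpha)
                    * (topic_word[j][w] + beta)
                    / (topic_total[j] + beta * vocab_size)
                    for j in range(n_topics)
                ]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                doc_topic[d][k] += 1
                topic_word[k][w] += 1
                topic_total[k] += 1
    return doc_topic, topic_word
```

Run on a toy corpus mixing, say, sports words and cooking words, each document's counts tend to concentrate in one topic, even though no labels were ever supplied — the unsupervised discovery the article describes.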
In recent years, Jordan has been on a mission
to help scientists, engineers, and others understand
the full scope of machine learning. He says
that developments in machine learning reflect
the emergence of a new field of engineering.
Moreover, he says, it is the first engineering field
that is human-centric, focused on the interface
between people and technology.
"While the science-fiction discussions about
AI and superintelligence are fun, they are a
distraction," he says. "There's not been enough
focus on the real problem, which is building
planetary-scale machine learning-based systems
that actually work, deliver value to humans, and
do not amplify inequities."
Jordan's current projects are based on ideas
from economics and his earlier blending of
computer science and statistics. He argues that
the goal of learning systems is to make decisions,
or to support human decision-making, and
decision-makers rarely operate in isolation.
They interact with other decision-makers,
each of whom might have different needs and
values, and the overall interaction needs to be
informed by economic principles, he says. He is
developing "a research agenda in which agents
learn about their preferences from real-world
experimentation, that they blend exploration
and exploitation as they collect data to learn
from, and that market mechanisms can structure
the learning process, providing incentives
for learners to gather certain kinds of data and
make certain kinds of coordinated decisions. The
beneficiary of such research will be real-world
systems that bring producers and consumers
together in learning-based markets that are attentive
to social welfare."
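The blend of exploration and exploitation that Jordan describes is the central tension in bandit problems. The sketch below is a generic epsilon-greedy learner, not drawn from Jordan's own work; the arm means, noise model, and parameter values are invented for illustration. The agent mostly exploits its current best estimate but occasionally explores at random, so every option keeps accumulating data.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=1000, seed=0):
    """A learner that usually exploits its best reward estimate
    but explores a random option with probability epsilon."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n       # pulls per arm
    estimates = [0.0] * n  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)             # noisy payoff
        counts[arm] += 1
        # Incremental update of the running mean for the chosen arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts
```

With two arms of mean reward 0.1 and 0.9, the estimates typically converge toward the true means and most pulls concentrate on the better arm; a market mechanism of the kind Jordan envisions would additionally shape which data the learners are incentivized to gather.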
Clarifying AI
In 2019 Jordan wrote "Artificial Intelligence-The
Revolution Hasn't Happened Yet," published in
the Harvard Data Science Review. He explains in
the article that the term AI is misunderstood not
only by the public but also by technologists. Back
in the 1950s, when the term was coined, he writes,
people aspired to build computing machines that
possessed human-level intelligence. That aspiration
still exists, he says, but what has happened in
the intervening decades is something different.
Computers have not become intelligent per se,
but they have provided capabilities that augment
human intelligence, he writes. Moreover, they
have excelled at low-level pattern-recognition
capabilities that could be performed in principle
by humans but at great cost.
Despite such developments being referred
to as "AI technology," he writes, the underlying
systems do not involve high-level reasoning or
thought. The systems do not form the kinds of
semantic representations and inferences that
humans are capable of. They do not formulate
and pursue long-term goals.
"For the foreseeable future, computers will
not be able to match humans in their ability to
reason abstractly about real-world situations,"
he writes. "We will need well-thought-out
interactions of humans and computers to solve
our most pressing problems."
Building a community
Jordan says he values IEEE particularly for its
investment in building mechanisms whereby
communities can connect with each other
through conferences and other forums.
