Automotive Engineering - February 2024 - 21

SENSOR TECHNOLOGY FEATURE

Tuning-up AI's 'understanding' to make safer ADAS, AVs

Kognic's software platform helps developers better understand and manage sensor-fusion data.

[Image caption: Kognic's advanced interpretation of sensor data helps artificial intelligence and machine learning recognize the human thing to do.]

by Bill Visnic
In December 2023, Kognic, the Gothenburg, Sweden-based developer
of a software platform to analyze and optimize the massively
complex datasets behind ADAS and automated-driving systems, was
in Dearborn, Michigan, to accept the Tech.AD USA award for Sensor
Perception solution of the year. The company doesn't make sensors, but
one might say it makes sense of the data that comes from sensors.
Kognic, established in 2018, is well-known in the ADAS/AV software
sector for its work to help developers extract better performance from
and enhance the robustness of safety-critical "ground-truth" information
gleaned from petabytes-upon-petabytes of sensor-fusion datasets.
Kognic CEO and co-founder Daniel Langkilde espoused a path for
improving artificial intelligence-reliant systems based on "programming
with data instead of programming with code."
There's a broad term that is probably unfamiliar to anyone not intimate
with software or artificial-intelligence development: AI alignment.
It's the part of AI safety research attempting to assure that AI systems
are aligned with human intent and values. Langkilde asserted in a
Forbes article in fall 2023 that, "In its capacity to power self-driving
cars, AI has not lived up to consumer expectations. The problem is not
one of intent. The problem is there is no single way to drive."
SAE Media spoke with Langkilde about what AI alignment really
means and what will be required from software to improve the safety
and performance of ADAS and high-level driving automation.
What exactly is AI alignment - and is it a new thing?
Langkilde: It's a rather new term. It's a new concept for most people.
In the AI community, I guess people interchangeably use 'AI
safety' and 'AI alignment.' There are a few different interpretations, I
suppose, because it's an emerging field, but typically AI alignment
is about ensuring that the behavior of an AI system is consistent
with either the preferences or goals of humans.
So as these systems become more capable, the 'policy' by which the
system operates becomes more opaque, and that increases the importance
of being able to probe it and ensure that the system actually does
what you intended it to do. I'm an engineer. I'd
love to think the world is well-behaved and easily quantified and so
forth. You realize that it's not. It's ambiguous and it's
subject to a lot of interpretation.
[Driving] still is a very subjective task. There are
many types of negotiations between drivers and judgment
calls about the viability of a path, the intention
of other objects and so forth. So I think under ideal
circumstances, self-driving vehicles are actually already
here. I mean, Waymo works really well.
As 'outsiders,' if we look at what you're trying to do
now with AI alignment, is it teaching machine learning
how to learn?
Langkilde: It needs to understand human preference. I
guess I prefer to be very precise here. Typically, with
today's machine learning, you put together a dataset,
you select a type of neural network or something, and
you train that on the entire dataset. It takes days or
weeks. It uses huge amounts of GPU.
For most self-driving-car companies, and certainly
almost all ADAS products, scenario interpretation is
typically not based on machine learning, or at least
not neural networks. That's because the tech
stack is typically divided into three pieces. You have
first a perception system that you train to understand
the world around you. That is 100% machine learning
today, pretty much, because that's where modern
deep learning turns out to be very powerful - to understand
the camera images and lidar point clouds
and radar reflections, deep learning works really well.
So you put together a large dataset, you label it very
carefully, and then you train a machine learning model
using supervised learning. That's step number one. Step
number two is that you try to predict where everything
will be going; your perception system gives you a snapshot
of the world, and then you try to immediately
make a prediction of where everything is going to go.
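The two steps Langkilde describes, a perception model trained with supervised learning on a labeled dataset, followed by an immediate prediction of where each detected object is going, can be sketched in miniature. This is purely an illustration under stated assumptions, not Kognic's or any production stack's method: a toy logistic-regression classifier stands in for a deep perception network, and a constant-velocity model stands in for a real motion predictor. All function names and data here are hypothetical.

```python
import numpy as np

# Step 1 (perception): supervised learning on a carefully labeled dataset.
# A toy logistic-regression classifier stands in for a deep network trained
# on camera/lidar/radar features. Names and data are illustrative only.

def train_perception(features, labels, lr=0.1, epochs=500):
    """Fit a binary classifier (e.g. 'pedestrian' vs. 'not') by gradient descent."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        logits = features @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = probs - labels  # gradient of mean cross-entropy w.r.t. logits
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def perceive(features, w, b):
    """Return a 'snapshot of the world': a predicted class per detection."""
    return (features @ w + b) > 0.0

# Step 2 (prediction): from that snapshot, immediately estimate where each
# tracked object is going. A constant-velocity model is the simplest stand-in.

def predict_motion(position, velocity, dt=1.0):
    """Extrapolate an object's position dt seconds ahead."""
    return position + dt * velocity
```

The split mirrors the interview: the learned part (perception) is trained offline on labeled data, while the downstream interpretation here is a plain, hand-written model rather than a neural network.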