Better Tech Podcast: Balancing Automation and Human Expertise in Data Ops
MLtwist COO Audrey Smith sat down with Better Tech host Peggy Tsai for a look into how people and machines work together to manage data. They discuss the twin challenges of data organization and data accuracy, and the balance between quality, time, and cost. Audrey also delves into how MLtwist prioritizes honest AI through transparency around their data sources.
Listen to the full episode here: https://www.youtube.com/watch?v=3iyt3Q39a8w
Intro: Insights into Operations
[0:00] [Music]

Peggy Tsai (0:12): Hello, and welcome back to another episode of Better Tech. My name is Peggy Tsai, and I'm looking forward to talking about balancing automation and human expertise in data operations. I'm pleased to be joined by Audrey Smith, the Chief Operating Officer of MLtwist. Hi Audrey, welcome to Better Tech.

Audrey Smith (0:35): Hi Peggy, thank you for having me today.

Peggy Tsai (0:38): It's great to have you here, because I really want to share with the audience a little bit about you and your journey to becoming Chief Operating Officer.

Audrey Smith (0:50): Sure, it's my pleasure.

Peggy Tsai (0:54): I'd also love to hear more about your passion for balancing human expertise with automation in data operations, and how that shaped your career.

Audrey Smith (1:05): Sure. I have a very uncommon career, I think, for a data operations person. I started as an in-house lawyer in France; I studied law for several years, then quickly decided I wanted international experience, so I went to the UK for six years, staying in the field. But when I arrived in the US ten years ago, I really wanted to go into tech, and to find my way in with my experience I decided to start my career all over again. I stumbled on data operations for AI very quickly: I got a job at a very famous company in Silicon Valley working on a voice recognition app. As a French speaker, I reviewed the French transcripts and checked whether they needed any corrections. That's how I started, and that's how I learned about data labeling for AI: what it means, what power it has, what it does to an algorithm's performance, and so on. I only did that for a few weeks, but it was enough to really get me excited about the field.

I then joined Google as a contractor for a few months, this time more on the quality control side of things, across several projects: Google Shopping, YouTube, and GDPR (that was the very beginning of GDPR). After that I joined Amazon, where I stayed for four years and learned about data operations, especially for visual search. I worked on all data types and learned how to handle data quality for AI products, from the moment you get the data from the data scientists to the moment you send it back annotated, at high quality, so they can train their models on it. Working on so many different projects at Amazon got really exciting for me.

I then joined Labelbox, a startup that was Series A when I arrived. I was the first person on data operations services, helping Labelbox customers with their data operations, and I built a team around me. That was also very exciting: a startup environment where everything moves fast and decisions are made on the fly.

And then, two years ago, I joined MLtwist as its COO. MLtwist is really at the heart of what you were talking about: the crossroads between automation and human input. How do you make sure that everything that can be automated in the data operations process for AI is automated, while keeping an eye on the high data quality needed to train a high-performing model, and making sure human input stays in the loop so that everything goes smoothly? That's what we've been focused on for the past two years, and it has been a great journey.

Peggy Tsai (4:30): Wow, Audrey, what an amazing background: a lawyer turned technologist turned data expert.
Data Project Essentials
Peggy Tsai (4:39): I loved hearing how the influences of GDPR certainly helped you along the way, I'm sure. And it's so fascinating that your knowledge of other languages gave you that perspective on proper labeling and tagging of data. I think that's where we see a lot of human errors, or even errors with machine labeling: not really fully understanding the language and the context. And I'm sure, as you mentioned, a lot of companies struggle with how to automate that tagging and labeling while still requiring human intervention to make the right corrections. So from your perspective, if you could elaborate: what are the key factors in striking the right balance for organizations that want to automate for the right data in their pipelines, but need human intervention to validate and verify that information as well?

Audrey Smith (5:52): Right, that's a great question. As a data operations person, I always keep in mind three different metrics that really matter to any data project. The first is the data quality requirements: what accuracy level do you want to achieve? Some companies are going to be okay with low accuracy because they want to go fast, get something out there, and then improve over time. You also have to look at time: how long do you have to deliver your data and productionize your AI product? And finally, the budget you have in mind. Obviously you want the highest quality possible, but you have a certain amount of money to spend, so you have to dance that dance: look at those three metrics and see how you can make it work with whatever you have in your hands, the time, the money, and the quality required of you. And that's something that will change over time, depending on the organization and the priorities inside it.

You need to make sure you can balance technology with human input, as you mentioned. I don't believe there is one tool out there that will make the need for human input go away, and I would go even further: I think we should require human input no matter what. Even if we think the highest quality possible can be achieved just by using some tooling, which I don't believe, you still need that human eye on it. Why? Because of what is coming now, and it's a hot topic of discussion: responsible AI. How do you make sure you're tackling bias in the data sourcing you're doing? The machine is not going to tell you about that; you need human input. It also varies if you're talking to people in different countries with different cultural backgrounds. We are in a human society, and human input is definitely something that should not go away over time. I believe that we, as data operations people in all these organizations, will be able to balance all those considerations and find the right balance for each project; it's never going to be the same from one use case to the next. And as you mentioned, text annotation will have different requirements from image annotation and from video annotation, because technology is evolving: you're going to have super tools out there able to do most of the work, even though the last mile should still be done by the human eye.

Peggy Tsai (8:48): Those are really great metrics, and I love what you said at the beginning about the threshold for data quality, which determines how much you will let automation handle. But I'm sure, as the COO, you have lots of stories of scenarios or projects where the balance between human oversight and automation played a significant role, positively or negatively, in the outcome. Can you share any stories with us today?

Audrey Smith (9:22): Yeah, of course, I have a lot. I think MLtwist is all about trying to connect the right tools to the right use case. So when you get a customer with a new use case, you look into it, you look at the tools that are out there, and you look at all these companies that can definitely claim, "oh, we can do all the pre-labeling, we don't need any human input," and so on.
The Balancing Act
Audrey Smith (9:45): So we test them. That's our job: to see which tool is going to be the best one out there for the use case. One example that's still true to this day, even though it happened to me a few years back, is video recorded with drones. Think about it: those drones move a lot, left to right depending on the wind, and up and down depending on who is piloting the drone. When you're a labeler, and I think you have to have been one to really understand what this means, it's a nightmare, because every single frame of the video is going to be different. There are tools out there using interpolation, or AI-enhanced labeling, that tell you: just use my tool and you'll get annotations on every single frame, and if there are some mistakes here and there, a person can go adjust the annotations on certain frames only, and you're good to go for drone videos. That is not the case, and as of today it's still not the case. Using pre-labeling or interpolation can actually be very counterproductive, because every single frame of a drone video is a little bit different, so you're basically asking a labeler to go and readjust every single annotation on every single frame. That's going to cost you more money than doing 100% manual work, and it's going to be very time-consuming. So you have to adjust, and that's what I'm really trying to tell your audience: depending on the context and the use case, a tool is going to be your ally, or it can make your life harder. Being able to assess your tooling before starting any job is mandatory, I think.
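Audrey's drone example turns on how keyframe interpolation works: a labeler annotates a few frames, and the tool fills in the frames between them by assuming smooth motion. A minimal sketch of that assumption in Python (the box format and function name are illustrative, not any particular labeling tool's API):

```python
def interpolate_box(box_a, box_b, frame, frame_a, frame_b):
    """Linearly interpolate an (x, y, width, height) box between two
    labeled keyframes, the way interpolation-based tools fill frames."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

# A labeler annotates frames 0 and 10; the tool fills in frames 1-9.
key_0 = (100.0, 100.0, 50.0, 50.0)   # box at frame 0
key_10 = (200.0, 120.0, 50.0, 50.0)  # box at frame 10

print(interpolate_box(key_0, key_10, frame=5, frame_a=0, frame_b=10))
# (150.0, 110.0, 50.0, 50.0)
```

The interpolated box is only right if the object really moved in a straight line between the keyframes; on shaky drone footage the object jumps off that path on nearly every frame, so each "free" annotation becomes a manual correction, which is Audrey's point about it costing more than fully manual work.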
Peggy Tsai (11:54): Yeah, that's a great example, and certainly some common sense should be applied as well. As you mentioned, there's such rapid advancement in AI and all the different machine learning technologies, but as you said, the human aspect needs to play a continuous role in these technologies. In your opinion, are there particular skills or mindsets that professionals should cultivate to stay relevant?

Audrey Smith (12:27): Absolutely. The first, I think, is to make sure you stay up to date with all the technologies out there, because they keep coming. At one point we were tracking the number of data labeling tools, and almost one new tool was appearing every month, with different technologies, different capabilities, different focuses on data types, and so on. So to stay relevant, I think it's very important to understand the technology that's out there and to assess it. One easy way is to just go and try it: a lot of these tools offer a free trial, and you can use them on a sample of your data and see what they do and how they perform. I think it's also very important to understand the difference between reality and the hype you can see on LinkedIn, all those posts about new technologies coming to market that are supposed to revolutionize everything.
Staying Relevant in Tech
Audrey Smith (13:30): One thing I believe is very true at the moment is that there is a big gap between what you can see on LinkedIn and what is happening in real life. Customers working on AI nowadays will always be more cautious about spending money on a new technology if they don't know what the ROI is going to be or look like. They're not just going to jump on the first new tech out there; they're going to try it, they're going to assess it, and that takes time. So just be aware that things are going to be a little more real in your day-to-day than what you see on LinkedIn. I think those two things are very important. And again, understand the importance of responsibility in whatever you're doing; I'm very passionate about that. As a data operations person, the human in the loop who is there to orchestrate everything going on on the data side, you know that data sourcing matters, the workforce you select to work on your data matters, and the tool you're using matters: for medical data, for instance, are you using a compliant, certified tool, and so on. There are a lot of different things to take into consideration when you're at the heart of the development of an AI product.

Peggy Tsai (14:51): Oh, absolutely. And certainly, as you said, that learning curve can be very steep, but it's continuous in terms of all the different technologies out there. It's a very interesting fact you shared that there is one new data labeling tool almost every month, and I'm not surprised, especially with all the new startups generative AI has spurred; there are lots of new opportunities out there. But let's talk more specifically about MLtwist, your company. It focuses on enabling data scientists to really concentrate on the data science aspect rather than the manual labeling, which can be tedious. So how do you ensure that the automated aspects of your platform integrate smoothly with the human elements, like the creativity and problem solving that are inherent in data science?

Audrey Smith (15:55): Absolutely, that's a great question. We are here to help data scientists; we're not here to replace them. If you look at the numbers out there, there's a famous statement that almost 80% of a data scientist's work right now is focused on data cleaning. We want to shift that. We want to make sure they spend more time on, as you mentioned, the creative part of their job: finding new ways to get the data they need to train their models in the best way possible. So we want to focus on the janitorial work they shouldn't be doing in the first place, which is the pre- and post-processing of the data: making sure the data is transformed in the right way and reformatted for the tool they're going to be using. We have to keep in mind that each time you connect with an external tool to somehow enhance your data, you have to reformat it into the format accepted by that tool; you're never going to be able to take the format you have and just inject it into any tool. It doesn't work that way. So we're automating all those pieces that seem boring and unsexy for a data scientist to work on. We're creating pipelines that connect the MLtwist platform to their cloud storage so that we can take their data out automatically, with nothing for them to do, and inject it directly, in the right format, into the tool that's going to be best for their use case. Once the data has been labeled, by a workforce they have or one we can also provide, we reformat it into the format they need to train their model. All of that goes away, and they can just focus on what matters to them and what they signed up for, which is training a model the best way possible.
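The reformatting step Audrey describes can be sketched as a small conversion function. The input record layout here is invented for illustration; the output follows the publicly documented COCO layout (images, annotations, categories) that she later mentions as a customer's target format:

```python
def to_coco(records, categories):
    """Convert flat per-box records into a COCO-style structure.

    records: [{'file': str, 'width': int, 'height': int,
               'label': str, 'bbox': [x, y, w, h]}, ...]
    """
    cat_ids = {name: i + 1 for i, name in enumerate(sorted(categories))}
    images, annotations, seen = [], [], {}
    for rec in records:
        # Register each image once and give it a stable numeric id.
        if rec["file"] not in seen:
            seen[rec["file"]] = len(seen) + 1
            images.append({"id": seen[rec["file"]], "file_name": rec["file"],
                           "width": rec["width"], "height": rec["height"]})
        x, y, w, h = rec["bbox"]
        annotations.append({"id": len(annotations) + 1,
                            "image_id": seen[rec["file"]],
                            "category_id": cat_ids[rec["label"]],
                            "bbox": [x, y, w, h], "area": w * h,
                            "iscrowd": 0})
    return {"images": images, "annotations": annotations,
            "categories": [{"id": i, "name": n} for n, i in cat_ids.items()]}

coco = to_coco([{"file": "frame_001.jpg", "width": 1920, "height": 1080,
                 "label": "car", "bbox": [10, 20, 100, 50]}], {"car", "drone"})
print(coco["annotations"][0]["area"])  # 5000
```

Every labeling tool expects its own variant of this structure, which is why the conversion has to be redone, in both directions, for each tool in the pipeline.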
Peggy Tsai (17:58): I think what you just described is often overlooked or misunderstood. One, the data cleaning part is very tedious, a huge part of the work, and a waste of the talent of your PhD data scientists. And secondly, the consistency you can establish at the beginning of the data pipeline, with tagging and labeling, provides tremendous value for downstream users, whether through automated controls or policy. It's really key, and I think a lot of executives on the business side kind of overlook that, so I love how you highlighted it. I know earlier you said that one of your passions is responsible AI, and I'd love for you to discuss how MLtwist is tackling responsible AI, and how data cards will change the way AI companies take responsibility for their AI data sourcing and development.

Audrey Smith (19:05): Right. I think it's a very important piece that we are developing at MLtwist. It's something, by the way, that has also been tackled by Amazon; they released something similar just a couple of months ago. The idea is really simple: an ID that any company can show for any dataset they're working on. On that ID card they can say: this data came from there, and it went through that tool.
Enhancing Data Science Workflow
Audrey Smith (19:42): And it went to that type of workforce. That data card was actually used by Stanford's AI department, which is one of our customers; they published a research paper on text annotation mentioning that they were using our data cards. I think that's very crucial, because it's very important, especially with the advancement of AI, that we are able to give some sort of responsibility back to the people who are developing the AI product. They need to be accountable for the way they are using the data. And we're going to go even further. It has not been completely done yet, but we also want to tackle data bias. For instance, we want to be able to say that the workforce that labeled the project has that gender mix, that age range, and comes from that country, because we want to be able to surface gaps. If the data is only labeled by men, what does that do to your data? If it's only labeled by people who are very young? Depending on the target customer for your AI product, it really matters whether your product can respond well to your target audience, and that means the data also has to be labeled by people who resemble your customers in the end. That's the future of the data card for MLtwist, but we have already been able to create that ID for datasets, especially on the tooling side and on the location of the workforce that has been used.
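A data card as Audrey describes it, an "ID card" for a dataset, might be sketched as a simple record. These fields and example values are inferred from the conversation, not MLtwist's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DataCard:
    """Hypothetical data card: provenance record for one dataset."""
    dataset_id: str
    source: str                 # where the raw data came from
    labeling_tool: str          # which tool the data went through
    workforce_location: str     # where the labelers are based
    # The future direction Audrey mentions: demographics to surface bias.
    workforce_demographics: dict = field(default_factory=dict)

card = DataCard(
    dataset_id="ds-001",
    source="customer cloud storage",
    labeling_tool="example-labeling-tool",
    workforce_location="example-country",
    workforce_demographics={"gender_mix": "unknown", "age_range": "unknown"},
)
print(asdict(card)["workforce_location"])  # example-country
```

The point of such a record is that it travels with the dataset, so anyone training on it can answer "where did this come from, and who labeled it?" without archaeology.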
Peggy Tsai (21:38): So essentially you're providing transparency around that data to reduce potential biases, and you're making yourself very accountable to your customers and stakeholders. That's excellent.
AI: Regulation & Quality Assurance
Peggy Tsai (21:59): Just out of curiosity: certainly in the US and in Europe there are a lot of frameworks, like the EU AI Act and its principles, or the US executive order on AI. Does MLtwist follow or engage with those principles and priorities, and embed them into your operations and processes?

Audrey Smith (22:25): Yeah, absolutely. It's very important to follow what's going on on the regulation side of things. For now we don't have clients in Europe, but absolutely, if we get customers in Europe we will have to follow the regulation when it comes to AI and data processing. There are also discussions going on where the people we're talking to absolutely don't want their data to leave Europe, so it's a very important thing we keep in mind as we progress, grow, and expand the type of customers we work with. But yes, we want to be part of that. We think it's very important to regulate the way AI is used, and as you mentioned, transparency is key. We are fully transparent; that's our way of working with our customers. Whatever they want to know about what we're doing is available to them.

Peggy Tsai (23:28): Absolutely, and I think that goes hand in hand with quality assurance. There are always questions around quality, accuracy, and completeness when it comes to these types of data operations. So how do you maintain a high standard of quality in these automated processes? I'm certainly sure human oversight also plays a role.

Audrey Smith (23:56): Yeah, quality control is also something we have in mind at MLtwist. We have developed quality control features on our platform that can be very simple, but that I'm surprised are not done more often. Essentially, once we get the output back from the data labeling tool, we have a feature that flags, or sends warnings to, the labeling workforce when something is an outlier in the data set: if items were labeled a certain way 90% of the time and the last 10% are kind of different, it doesn't mean they're wrong; it just means it's worth sending a warning so the workforce takes a final look and makes the final call on what is wrong and what is not. Just having those little things in place tremendously impacts the overall quality of the data set.

Another example: we work a lot on videos, so I talk a lot about that. I mentioned that when we get videos from our customers, they come with a certain format the customer wants to use to train their model; let's say it's the COCO format. When a video enters the data labeling tool, it has to be reformatted into the format accepted by that tool. For very heavy videos, quality control cannot be done directly on the platform, because it just lags. You're dealing with workforces that don't necessarily have very strong connections, because they're based in other parts of the world, and they cannot do a very thorough quality control on those videos directly in the tool; it just doesn't work. So we've developed a feature that gets the output back, reformats it into the COCO format our customer wants, and then creates a movie containing the annotations done by the labelers. They can just watch the movie and see essentially what the customer will see when they check the work, and they can very easily spot that an annotation wasn't done correctly on a certain frame, go back to the labeling tool, and so on. It's a very iterative process that really impacts the quality of the work, and it has become key to the quality control process on the workforce side; it's not something they can skip. We're very proud to provide that to the workforce to help them do a better job.
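The outlier-warning feature Audrey describes earlier in this answer can be sketched as a simple frequency check over a finished label set. The 10% threshold and data shapes here are illustrative assumptions, not MLtwist's implementation:

```python
from collections import Counter

def flag_outliers(labels, threshold=0.10):
    """Return indices of labels that occur rarely across the dataset.

    A flag is a warning for human review, not an automatic 'wrong':
    the workforce makes the final call, as described in the interview."""
    counts = Counter(labels)
    total = len(labels)
    return [i for i, lab in enumerate(labels)
            if counts[lab] / total < threshold]

labels = ["car"] * 18 + ["truck", "car"]   # 19 "car", 1 "truck"
print(flag_outliers(labels))  # [18] -> the lone "truck" gets a warning
```

A real system would group items by content similarity before comparing labels, but the principle is the same: route the statistically unusual 10% to a human rather than trusting or rejecting it automatically.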
Peggy Tsai (26:43): Very interesting. I don't think most people understand the depth that takes, especially for tagging videos. Traditionally, data labeling has mostly been on text formats, so other types of formats require a different level of quality assurance. A great example. Audrey, I know our time has flown by so quickly; I can't believe it, given the great discussion we've been having.
Key Insights for Data Operations
Peggy Tsai (27:17): But just to close it out, I'm sure our listeners have certainly learned a lot when it comes to automation and human expertise in data operations. Any advice or insights you'd like to share with our audience as your closing comments for today?

Audrey Smith (27:38): Sure, two things. The first is a reminder about the dance you need to dance; I like seeing it that way. Really keep in mind the three metrics that matter, data quality, time, and budget, and try to balance them based on your priorities. The second thing is that, with a few other data operations people, I created a group on LinkedIn called Data Ops for AI. We're around 250 people now, and we're just discussing and asking for advice. Sometimes I get new data types: lately I've been working on 3D annotation, something very new to me. How do I go about it? What tool would be best? And so on. I got a tremendous amount of information from that group. So anyone interested in learning a bit more about data operations, and how to go about it, can join our group, share their knowledge, and ask their questions.

Peggy Tsai (28:34): Fantastic. I certainly encourage everyone to follow and connect with Audrey Smith of MLtwist on LinkedIn, and to join the Data Ops for AI group as well; that sounds very interesting, and I think I'll do the same. Well, thank you so much, Audrey, for your time today. I hope everyone enjoyed today's episode of Better Tech, and I look forward to seeing you again at the crossroads of technology and innovation. Thank you so much for your time.

[29:10] [Music]