In this episode, Ted sits down with Kevin Frazier, AI Innovation and Law Fellow at UT Law, to discuss the critical role of AI literacy and regulation in the legal industry. From understanding the limitations of AI models to navigating the challenges of a patchwork of state-level laws, Kevin shares his expertise in AI policy, legal education, and emerging tech governance. Highlighting the need for knowledge diffusion and clearer national frameworks, this conversation explores what today’s AI developments mean for law professionals and future practitioners alike.
In this episode, Kevin shares insights on how to:
Build AI literacy within law schools and legal practice
Understand the regulatory landscape shaping AI deployment
Navigate the risks of inconsistent state laws on AI
Leverage knowledge diffusion to use AI more effectively
Prepare the next generation of lawyers for an AI-driven profession
Key takeaways:
AI literacy is essential for law students and practitioners to use AI responsibly
A patchwork of state laws could create major compliance challenges for businesses
National-level AI regulation and clear frameworks are urgently needed
Law schools play a critical role in preparing lawyers to adapt to emerging technologies
About the guest, Kevin Frazier
Kevin Frazier is the AI Innovation and Law Fellow at the University of Texas School of Law, where he focuses on helping law students and professionals build AI literacy for the future of legal practice. He is also the co-host of the Scaling Laws podcast and a Senior Editor at Lawfare. Before entering academia, Kevin clerked on the Montana Supreme Court and contributed research at the Institute for Law and AI — and he shares his latest insights on AI through his Substack, Appleseed AI.
“I have never met an AI expert. And in fact, if I meet an AI expert, that’s the surest sign that they’re not because this technology is moving too quickly.”
1
00:00:03,211 --> 00:00:05,128
Kevin Frazier, how are you today?
2
00:00:05,506 --> 00:00:06,216
Doing well, Ted.
3
00:00:06,216 --> 00:00:07,423
Thanks for having me on.
4
00:00:07,423 --> 00:00:08,794
Yeah, I'm excited.
5
00:00:08,794 --> 00:00:21,101
You and I had a conversation a couple of... actually, it was this week, and talked
about some of the new AI regulation that was pending, and we're gonna discuss the outcome.
6
00:00:21,202 --> 00:00:24,724
And today is July 3rd.
7
00:00:24,724 --> 00:00:27,325
So, and I think this episode is gonna get released next week.
8
00:00:27,325 --> 00:00:29,266
So this will be very timely information.
9
00:00:29,266 --> 00:00:32,819
um But before we get into that, let's get you introduced.
10
00:00:32,819 --> 00:00:33,779
You're a...
11
00:00:33,875 --> 00:00:37,455
AI researcher and um an academic.
12
00:00:37,455 --> 00:00:41,435
Why don't you tell us a little bit about who you are, what you do, and where you do it.
13
00:00:41,474 --> 00:00:50,778
Yeah, so I'm based here in Austin, land of tacos, bats, and now the AI Innovation and Law
program here at the University of Texas School of Law.
14
00:00:50,778 --> 00:00:57,501
So I'm the school's inaugural AI Innovation and Law fellow, which is super exciting.
15
00:00:57,501 --> 00:01:09,186
So I get to help make sure that all of the students here at UT are AI literate and ready
to go into the legal practice, knowing the pros and cons of AI and how best to help their
16
00:01:09,186 --> 00:01:10,018
clients.
17
00:01:10,018 --> 00:01:13,770
And also to contribute to some of these important policy conversations.
18
00:01:13,770 --> 00:01:21,564
So my background is uh doing a little bit of everything in the land of emerging tech
policy.
19
00:01:21,564 --> 00:01:23,745
So I worked for Google for a little stint.
20
00:01:23,745 --> 00:01:27,027
um I've worked for the government of Oregon.
21
00:01:27,027 --> 00:01:29,649
I was a clerk on the Montana Supreme Court.
22
00:01:29,649 --> 00:01:31,069
I taught law at St.
23
00:01:31,069 --> 00:01:32,830
Thomas University College of Law.
24
00:01:32,830 --> 00:01:40,014
And I did some research for a group called the Institute for Law and AI, but now I get to
spend my full time here at UT.
25
00:01:40,130 --> 00:01:48,002
teaching AI, writing about AI, and like you, podcasting about AI for a little podcast
called Scaling Laws.
26
00:01:48,002 --> 00:01:51,106
So like you, I can't get enough of this stuff.
27
00:01:51,137 --> 00:01:51,978
Absolutely, man.
28
00:01:51,978 --> 00:01:53,039
I'm I'm jealous.
29
00:01:53,039 --> 00:02:00,007
I wish... this is like a very part-time gig for me. Like, I still have a day job, but your
day job sounds awesome.
30
00:02:00,007 --> 00:02:01,610
I can't believe I get to do this.
31
00:02:01,610 --> 00:02:03,334
It's the best job ever.
32
00:02:03,334 --> 00:02:11,489
And hopefully you'll find me, Ted, buried here outside the law school, and my tombstone
will read: he did what he was excited by.
33
00:02:11,489 --> 00:02:13,329
That's good stuff.
34
00:02:13,909 --> 00:02:23,509
Well, I guess before we jump into the agenda, I'm encouraged to hear that law schools are
really moving in this direction.
35
00:02:23,589 --> 00:02:35,209
I saw a stat from the ABA, I think from December, that said just over 50% of law schools
even had a formal AI course.
36
00:02:35,989 --> 00:02:38,629
So I've had many.
37
00:02:38,933 --> 00:02:54,316
professors on the podcast and we have commiserated over really the lack of preparedness
that, you know, new law grads um have when it comes to really understanding the
38
00:02:54,316 --> 00:02:55,207
technology.
39
00:02:55,207 --> 00:03:06,135
And, you know, we also have a dynamic within the industry itself where, you know,
historically clients have subsidized new associate training, you know, through, um you
40
00:03:06,135 --> 00:03:08,497
know, the mentorship
41
00:03:08,673 --> 00:03:15,388
program that uh Big Law has for new associate development.
42
00:03:15,388 --> 00:03:19,500
So it's really encouraging to hear that this is taking place.
43
00:03:19,906 --> 00:03:25,579
Yeah, no, I couldn't be more proud of the UT system as a whole leaning into AI.
44
00:03:25,579 --> 00:03:37,116
Actually, last year here in Austin was the so-called Year of AI, where the entire campus
was committed to addressing how are we going to adjust to this new technological age.
45
00:03:37,116 --> 00:03:47,412
Here at the law school, Dean Bobby Chesney has made it clear that as much attention as the
Harvards get, the Stanfords get, the NYUs get,
46
00:03:47,466 --> 00:03:56,634
Austin's really a spot where if you want to go find a nexus of policymakers, venture
capitalists, and AI developers, you're going to find them in Austin.
47
00:03:56,634 --> 00:04:07,142
And so this is really a spot that students can come to, scholars can come to, community
members can come to, and find people who are knowledgeable about AI.
48
00:04:07,142 --> 00:04:12,967
And I think critically, something that you and I discussed earlier, curious about AI.
49
00:04:12,967 --> 00:04:16,834
One of my tired lines, my wife, if she ever listens to this,
50
00:04:16,834 --> 00:04:19,275
will say, my gosh, you said it again.
51
00:04:19,414 --> 00:04:21,657
I have never met an AI expert.
52
00:04:21,657 --> 00:04:29,223
And in fact, if I meet an AI expert, that's the surest sign that they're not because this
technology is moving too quickly.
53
00:04:29,223 --> 00:04:30,523
It's too complex.
54
00:04:30,523 --> 00:04:37,658
And anyone who thinks they have their entire head wrapped uh around this is just full of
hooey, in my opinion.
55
00:04:37,658 --> 00:04:46,402
And so it's awesome to be in a spot where everyone is committed to working in an
interdisciplinary fashion and a practical fashion, to your point.
56
00:04:46,402 --> 00:04:49,695
so that they leave the law school practice-ready.
57
00:04:49,695 --> 00:04:56,830
Yeah, and I mean, to your point about, you know, no AI experts, the frontier labs don't
even really know how these models work.
58
00:04:56,830 --> 00:05:09,509
I think Anthropic has done probably the best job of all the frontier labs of really
digging in and creating transparency around how these models really work, their inner
59
00:05:09,509 --> 00:05:12,551
workings and how they get to their output.
60
00:05:12,551 --> 00:05:18,156
But yeah, I mean, these things are still a bit of a black box, even for the people who
created them.
61
00:05:18,156 --> 00:05:18,796
Right.
62
00:05:18,796 --> 00:05:30,075
No, I've had wonderful conversations with folks like Joshua Batson at Anthropic, who was
one of the leading researchers on their mechanistic interpretability report, where they
63
00:05:30,075 --> 00:05:35,779
went and showed, for example, that their models weren't just looking at the next best
word.
64
00:05:35,779 --> 00:05:44,796
That's kind of the usual way we like to try to dumb down LLMs is to just say, oh, you
know, they're just looking at the next best word based off of this distribution of
65
00:05:44,796 --> 00:05:45,876
training data.
66
00:05:45,976 --> 00:05:55,624
But if you go read that report and they write it in accessible language and it is
engaging, it is a little lengthy, but you know, maybe throw it into NotebookLM and, you
67
00:05:55,624 --> 00:05:57,655
know, make that a little easier.
68
00:05:57,776 --> 00:06:04,661
But you see these models, when you ask them to write you a poem, are actually
working backwards, right?
69
00:06:04,661 --> 00:06:13,288
They know what word they're going to end a sentence with and they start thinking through,
okay, how do I make sure I tee myself up to get this rhyming pattern going?
70
00:06:13,288 --> 00:06:16,000
And that level of sophistication is just
71
00:06:16,000 --> 00:06:16,944
scratching the surface.
72
00:06:16,944 --> 00:06:22,537
There's so much beneath this iceberg and it's a really exciting time to be in this space.
73
00:06:22,537 --> 00:06:34,668
Yeah, and you know, they've also been transparent around the um not so desirable human
characteristics like deception that these LLMs exhibit.
74
00:06:34,668 --> 00:06:49,473
And I think that's also a really important aspect for people to understand, for users of
the system, so they can have awareness around the possibilities and really have a lens of,
75
00:06:49,473 --> 00:06:53,733
yeah, a little bit of a healthy skepticism about what's being presented.
76
00:06:53,793 --> 00:06:56,033
It's... they've done a fantastic job.
77
00:06:56,033 --> 00:06:57,473
I'm a big Anthropic fan.
78
00:06:57,473 --> 00:07:04,753
I use, you know... Claude, Gemini, and ChatGPT are my go-tos, and I use them all for
different things.
79
00:07:04,753 --> 00:07:08,653
But, you know, I will, I probably use Claude the least.
80
00:07:08,653 --> 00:07:10,693
I'm doing a lot more with Gemini now.
81
00:07:10,693 --> 00:07:12,773
Gemini is blowing my mind.
82
00:07:12,793 --> 00:07:17,237
But I will continue to support them with my $20 a month.
83
00:07:17,237 --> 00:07:22,503
because I just love the work that they're doing and really appreciate all the transparency
they're creating.
84
00:07:22,626 --> 00:07:26,879
I think the writing with Claude is just incredible.
85
00:07:26,879 --> 00:07:36,895
To be able to tell Claude, for example, what style of writing you want to go forward with
and to be able to train it to focus on your specific writing style is exciting.
86
00:07:36,895 --> 00:07:42,798
But to your point, it's also key to just have folks know what are the key limitations.
87
00:07:42,798 --> 00:07:49,402
So for example, sycophancy has become a huge concern across a lot of these models.
88
00:07:49,794 --> 00:07:55,618
My favorite example is you can go in and say, hey, write in the style of the Harvard Law
Review.
89
00:07:55,658 --> 00:08:04,124
And for folks who aren't in the uh legal scholarship world, obviously getting anything
published by the Harvard Law Review would be wildly exciting.
90
00:08:04,124 --> 00:08:09,157
You'll enter some text and you'll say, all right, give me some feedback from the
perspective of the Harvard Law Review.
91
00:08:09,157 --> 00:08:13,320
And oftentimes you'll get, my gosh, this is excellent.
92
00:08:13,320 --> 00:08:16,472
There is no way the Law Review can turn you down.
93
00:08:16,472 --> 00:08:19,614
And I think you've nailed it on the head, but.
94
00:08:19,650 --> 00:08:25,277
When you have that sophistication to be able to know, okay, it may be a little
sycophantic, I can press it, though.
95
00:08:25,277 --> 00:08:29,442
I can nudge it to be more of a harsh critic.
96
00:08:29,442 --> 00:08:39,617
And once you have that level of literacy, these tools really do have just so much
potential to transform your professional and personal approach to so many tasks.
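A minimal sketch of the "harsh critic" nudge Kevin describes, assuming the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, not anything specified in the episode:

```python
# Hypothetical sketch: counteracting sycophancy with an explicit critic persona.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

draft = "..."  # the article or brief you want reviewed

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whichever you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a harsh, skeptical law review editor. Do not praise the "
                "draft. List only weaknesses: flawed arguments, missing citations, "
                "and unsupported claims, each with a concrete suggested fix."
            ),
        },
        {"role": "user", "content": f"Critique this draft:\n\n{draft}"},
    ],
)
print(response.choices[0].message.content)
```

The same nudge works in a plain chat window: instead of asking whether a draft is good, ask for the strongest objections a skeptical reviewer would raise.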
97
00:08:39,617 --> 00:08:43,537
Didn't OpenAI roll back GPT-4o because of this?
98
00:08:43,640 --> 00:08:44,841
Too nice, too nice.
99
00:08:44,841 --> 00:08:48,514
It was too, yeah, just giving everyone too many good vibes.
100
00:08:48,514 --> 00:09:01,335
And I think that speaks to the fact that there is always going to be some degree of a role
for a human, especially in key relationships where you have mentors, where you have close
101
00:09:01,335 --> 00:09:05,548
companions, where you have loved ones who are able to tell you the hard truth.
102
00:09:05,548 --> 00:09:07,360
That's what makes a good friend, right?
103
00:09:07,360 --> 00:09:13,094
And a good teacher and a good uh partner is they can call you out on your BS.
104
00:09:13,450 --> 00:09:19,315
With AI, it's harder; it's proven a little bit more difficult to make them more
confrontational.
105
00:09:19,315 --> 00:09:20,815
Yeah, 100%.
106
00:09:20,815 --> 00:09:27,210
Well, when we spoke earlier in the week, there was some pending legislation that you and I
talked about that I thought was super interesting.
107
00:09:27,251 --> 00:09:39,449
And the implications are, you know, really hard to put words around, you know, had that
piece of legislation, that part of the legislation, passed.
108
00:09:39,449 --> 00:09:42,361
And that was um
109
00:09:43,086 --> 00:09:51,475
I'll let you explain it because you're much closer to it, but it was essentially a 10-year
moratorium on state-level legislation around AI.
110
00:09:51,475 --> 00:09:56,620
Tell us a little bit about what was proposed and then ultimately where it landed.
111
00:09:56,920 --> 00:10:11,041
Yeah, so as part of the one big, beautiful budget bill, we saw in the House version of
that bill a 10-year moratorium on a wide swath of state AI regulations.
112
00:10:11,182 --> 00:10:23,091
And the inclusion of that language was really out of a concern that we could see, like we
have in the privacy space, a sort of patchwork approach to a key area of law.
113
00:10:23,111 --> 00:10:26,784
And if you go do economic analysis and look at
114
00:10:26,798 --> 00:10:36,224
Who is most implicated by California having one set of privacy standards and New York
having a different set and Virginia having its own and Washington having its own?
115
00:10:36,224 --> 00:10:38,005
Who does that actually impact?
116
00:10:38,005 --> 00:10:49,271
Well, in many cases, it tends to be small and medium sized businesses because they don't
have huge compliance offices, for example, or even the businesses that are just nearing
117
00:10:49,271 --> 00:10:52,653
the threshold of being implicated by those privacy laws.
118
00:10:52,653 --> 00:10:54,126
They too have to start
119
00:10:54,126 --> 00:11:02,900
hiring outside counsel, they have to be monitoring what their employees are doing to make
sure they comply with the nuances of each of these state bills.
120
00:11:02,900 --> 00:11:10,663
And so a lot of folks are concerned that we may see a similar patchwork apply in the AI
context.
121
00:11:10,663 --> 00:11:21,068
If every state is thinking through how it's gonna regulate AI differently... even how we
define AI has proven to be a difficult challenge among state legislators.
122
00:11:21,110 --> 00:11:29,316
And so we saw the House say, all right, we're going to move forward with a 10-year
moratorium on AI-specific state regulation.
123
00:11:29,316 --> 00:11:35,340
Now it's important to note that the language in the House bill was wildly unclear.
124
00:11:35,340 --> 00:11:43,286
I'm not sure who wrote the legislation, uh but yeah, you know, they could have used some
help from the drafting office.
125
00:11:43,286 --> 00:11:49,570
It was a bit unfortunate, because that muddled language added a lot of confusion about
126
00:11:49,570 --> 00:11:54,553
how that moratorium would work in practice, and what state laws would actually be
implicated.
127
00:11:54,553 --> 00:12:08,981
The thing that the proponents of this moratorium were aiming for was that there would be a
ban or a pause on state regulation that was specific to AI.
128
00:12:08,981 --> 00:12:17,846
And so this was really out of a concern that, again, we would have uh myriad standards,
myriad definitions applying to AI development itself.
129
00:12:17,912 --> 00:12:28,805
But it didn't want to capture some of the general consumer protection laws that we know
are so important to making sure everyone can, for example, buy a home without being
130
00:12:28,805 --> 00:12:38,128
discriminated against, be hired or fired without being discriminated against, prevent
businesses from using unfair or deceptive business practices.
131
00:12:38,128 --> 00:12:41,648
So that was the kind of background of the House language.
132
00:12:41,689 --> 00:12:46,930
Well, as with all bills, we saw the House language then move into the Senate.
133
00:12:47,014 --> 00:12:59,311
And the Senate saw a pretty crazy, I think that's the only word that can be used to
describe this, a pretty crazy debate occur between Senator Cruz, who was one of the main
134
00:12:59,311 --> 00:13:11,047
proponents of the moratorium, and Senator Marsha Blackburn from Tennessee, who had
concerns that the moratorium might prohibit enforcement of the ELVIS Act.
135
00:13:11,047 --> 00:13:16,960
Now, the ELVIS Act is one of these AI-specific laws that the Tennessee legislature passed.
136
00:13:16,962 --> 00:13:27,928
with a specific goal of making sure that uh the creators, the musicians, all those folks
we associate with Nashville and Tennessee would have their name, image, and likeness
137
00:13:27,928 --> 00:13:37,273
protected as a result of perhaps training on their music uh and even producing deep fakes
of their songs and things like that.
138
00:13:37,273 --> 00:13:43,817
So there was a debate and a compromise was reached between Senator Blackburn and Senator
Cruz.
139
00:13:43,817 --> 00:13:46,918
They reduced it to a five-year moratorium.
140
00:13:46,946 --> 00:13:55,830
They made sure that the language of the moratorium was compliant with some procedural
hurdles, which is a whole nother can of worms.
141
00:13:55,830 --> 00:14:04,334
Basically, if you have a budget bill, there has to be a budgetary ramification of the
language in each provision of that budget bill.
142
00:14:04,334 --> 00:14:11,117
So now the moratorium was connected to uh broadband funds and AI deployment funds.
143
00:14:11,117 --> 00:14:14,918
And so all of a sudden, we just got this really crazy
144
00:14:14,968 --> 00:14:17,681
combination of ideas and concerns.
145
00:14:17,681 --> 00:14:27,649
And ultimately the Senate decided by a vote of 99 to one to just strip that language out
of the one big beautiful bill.
146
00:14:27,649 --> 00:14:34,596
So as it stands, we continue to have Congress grappling with how best to proceed.
147
00:14:34,596 --> 00:14:42,252
Congress has really only enacted one AI-specific law, the Take It Down Act, which pertains
to deepfakes.
148
00:14:42,498 --> 00:14:46,822
But besides that, we're still left asking, what is our national vision for AI?
149
00:14:46,822 --> 00:14:51,486
Where are we going to go with this huge regulatory issue?
150
00:14:51,747 --> 00:14:56,491
And in that sort of regulatory void, we now have 50 states.
151
00:14:56,491 --> 00:14:59,694
Across those states, there are hundreds of AI bills.
152
00:14:59,694 --> 00:15:05,670
Depending on who you ask, it's anywhere from 100 to 200 really specific AI bills.
153
00:15:05,670 --> 00:15:08,290
That's Steven Adler's analysis.
154
00:15:08,290 --> 00:15:18,868
Whereas if you go talk to someone like Adam Thierer at R Street, he'll tell you there are
hundreds, if not a thousand or more, pieces of AI legislation pending before the states.
155
00:15:18,868 --> 00:15:25,062
And so it seems as though we may be on the precipice of a sort of AI patchwork.
156
00:15:25,249 --> 00:15:32,749
Yeah, and to your point, that sounds really difficult for businesses and commerce to
navigate.
157
00:15:32,749 --> 00:15:37,749
And I'm wondering, have we just kicked the can down the road?
158
00:15:37,749 --> 00:15:50,489
Because the path of each state making its own unique set of rules sounds completely
unsustainable from where I sit as a business owner and someone who uses the technology
159
00:15:50,489 --> 00:15:51,949
every day.
160
00:15:52,649 --> 00:15:53,789
Is that?
161
00:15:53,865 --> 00:16:05,317
You know, have we just postponed the feds, you know, stepping in and making some rules, or
is the status quo going to be around for a little while?
162
00:16:05,317 --> 00:16:06,252
Do we know?
163
00:16:06,252 --> 00:16:16,432
Yeah, if I had to bet and I'll preface by saying I'm not a betting man because if you
check my March Madness bracket uh each April, you'll see what a disaster it is.
164
00:16:16,633 --> 00:16:28,925
But if you look at the current political winds, I think we're going to see at least a
handful of states, like New York with the RAISE Act sponsored by Assemblymember Bores.
165
00:16:28,925 --> 00:16:30,314
uh
166
00:16:30,314 --> 00:16:42,004
If we look at Colorado, which is actively working towards implementing the Colorado AI
Act, and if we look toward California, which has already passed a bevy of AI-specific
167
00:16:42,004 --> 00:16:45,146
laws, this patchwork is coming.
168
00:16:45,146 --> 00:16:50,731
And so when that patchwork does develop, we have a couple questions to ask.
169
00:16:50,731 --> 00:16:52,933
And this is my concern.
170
00:16:52,933 --> 00:16:59,878
So if you talk to folks about laboratories of democracy, they'll tell you this is exactly
how
171
00:17:00,002 --> 00:17:01,323
federalism is supposed to work.
172
00:17:01,323 --> 00:17:01,973
This is great.
173
00:17:01,973 --> 00:17:08,527
We have states experimenting with different novel approaches to a tricky regulatory
problem.
174
00:17:09,008 --> 00:17:14,332
Well, the issue there is that AI isn't contained by state borders, right?
175
00:17:14,332 --> 00:17:24,158
This isn't something like regulating a specific school district in your community or
regulating a specific natural resource that's just in your state.
176
00:17:24,376 --> 00:17:33,789
How you regulate AI can have huge ramifications on how AI is developed and deployed across
the entire country.
177
00:17:33,789 --> 00:17:42,311
And so I think one key element to point out is that laboratories of democracy imply
that they're operating in Petri dishes.
178
00:17:42,311 --> 00:17:44,451
And yet these Petri dishes have been broken.
179
00:17:44,451 --> 00:17:50,693
And so one state's AI regulation is going to flood into and impact other states.
180
00:17:50,893 --> 00:17:54,434
Another key thing to point out about laboratories
181
00:17:54,474 --> 00:18:00,357
and I'm a sucker for puns and metaphors, so I apologize for leaning so heavily into this.
182
00:18:00,437 --> 00:18:05,460
But when you think about laboratories, you're talking about experiments, right?
183
00:18:05,480 --> 00:18:12,624
Well, experiments imply that you're going to learn from and adjust and change based off of
the results.
184
00:18:12,764 --> 00:18:21,889
But something we don't see in a lot of these state laws are things like sunset clauses,
things that would say, okay, we're gonna try this law for two years.
185
00:18:21,889 --> 00:18:23,810
At the end of the two years, we're going to
186
00:18:23,810 --> 00:18:28,332
reevaluate, should we move forward with this legislation or should we change it?
187
00:18:28,332 --> 00:18:40,437
We don't see huge outlays, huge investments in things like retrospective review, where we
would perhaps identify outside stakeholders and independent experts to evaluate whether
188
00:18:40,437 --> 00:18:42,418
that legislation worked as intended.
189
00:18:42,418 --> 00:18:47,950
If we had those safeguards in place to be able to say, was this a good idea in retrospect?
190
00:18:47,950 --> 00:18:52,300
Should we move forward with this or do we need to go back to the drawing board?
191
00:18:52,300 --> 00:18:56,734
I think that would make a lot of folks who are concerned about this patchwork more
comfortable.
192
00:18:56,734 --> 00:19:06,652
And I hope that state legislators consider investing in and moving forward with those
sorts of safeguards, but I haven't seen that so far.
193
00:19:06,685 --> 00:19:07,276
Interesting.
194
00:19:07,276 --> 00:19:18,906
And then how do, I don't know if the New York Times suit against OpenAI was in federal
court or state court, but you know, there was a ruling where they had to essentially
195
00:19:18,906 --> 00:19:27,173
retain history for a certain period of time that created all sorts of other unintended
consequences.
196
00:19:27,173 --> 00:19:33,628
Like, how are we going to navigate scenarios like that in the current state?
197
00:19:33,858 --> 00:19:42,264
Yeah, so right now the pending legislation, excuse me, the pending litigation between the
New York Times and OpenAI, that's in federal district court.
198
00:19:42,264 --> 00:19:54,822
And this preservation requirement of basically saving queries that have been entered into
OpenAI has caused a lot of alarm bells to go off, especially in the legal community.
199
00:19:54,822 --> 00:20:03,378
I've already talked to folks at uh various firms who say that they've had partners,
they've had clients coming to them and saying, see,
200
00:20:03,416 --> 00:20:06,188
This is exactly why we shouldn't use AI.
201
00:20:06,188 --> 00:20:16,756
And uh now we see that our queries may be retained and who knows what that means for
maintaining client confidentiality and attorney-client privilege.
202
00:20:16,756 --> 00:20:20,118
And so this has opened up a pretty big can of worms.
203
00:20:20,118 --> 00:20:25,542
And this all speaks to the fact that we need some regulatory clarity.
204
00:20:25,643 --> 00:20:29,545
We know that when we have an absence of...
205
00:20:29,545 --> 00:20:30,816
uh
206
00:20:30,816 --> 00:20:43,513
safeguards, and an absence of knowledge about how and when laws are going to be enforced,
or how especially outdated and antiquated rules and norms in various professions
207
00:20:43,513 --> 00:20:49,897
are going to be applied in this new, novel context, it really adds to an unhelpful degree
of ambiguity.
208
00:20:49,897 --> 00:20:59,822
And it's also important to note: should we feel comfortable, from a big-D Democracy
standpoint, with the fact that
209
00:20:59,822 --> 00:21:07,949
one judge sitting in a federal district court is upending a lot of use cases of AI right
now.
210
00:21:07,949 --> 00:21:09,251
A lot of people are skeptical.
211
00:21:09,251 --> 00:21:10,852
A lot of people are scared.
212
00:21:10,852 --> 00:21:22,342
And this is another reason why we should be having a national conversation about AI and
holding Congress's feet to the fire to say, we need a national vision.
213
00:21:22,342 --> 00:21:26,668
We need clarity so that we can prevent this sort of patchwork approach.
214
00:21:26,668 --> 00:21:37,257
And so that courts know how to proceed rather than kind of uh seemingly developing some
unclear uh steps via these bespoke pieces of litigation.
215
00:21:37,257 --> 00:21:42,608
Yeah, and like how does that impact OpenAI relative to its competitors?
216
00:21:42,608 --> 00:21:49,710
Like, you know, I actually do a fair amount of legal analysis in the AI models for a
variety of things.
217
00:21:49,710 --> 00:21:59,643
If I have a new hire and they have non-compete language that I have to figure out and
navigate my way through, or, you know, we're dealing with an operating
218
00:21:59,643 --> 00:22:04,644
agreement amendment right now amongst the partners at InfoDash and I have been digging
deep.
219
00:22:04,644 --> 00:22:07,295
I've been using other models
220
00:22:07,295 --> 00:22:18,416
because I don't want... I mean, it feels like it's really putting a burden on OpenAI
relative to its competitors.
221
00:22:18,416 --> 00:22:20,157
Is that accurate?
222
00:22:20,248 --> 00:22:34,762
Yeah, I don't have specific insight into whether their monthly average user count, for
example, has taken a hit or if uh we've seen any major changes to their clientele,
223
00:22:34,762 --> 00:22:37,783
especially with respect to large enterprises.
224
00:22:37,803 --> 00:22:41,254
My hunch is that things have definitely slowed.
225
00:22:41,254 --> 00:22:47,535
I know a lot of companies are using Copilot and they're saying, my gosh, why are we using
Copilot?
226
00:22:47,535 --> 00:22:50,038
Can we find anything else to switch to, which is
227
00:22:50,038 --> 00:22:51,698
a whole nother conversation.
228
00:22:51,918 --> 00:22:56,980
And they probably initially were saying, great, let's just go to OpenAI.
229
00:22:56,980 --> 00:23:05,273
But the second you get a lawyer in the room who's aware of this preservation request and
worried about that language and worried about this perhaps occurring again in the future,
230
00:23:05,273 --> 00:23:07,983
that may slow things down.
231
00:23:07,983 --> 00:23:14,305
So I think you're right to say that, at a minimum, this isn't helping increase OpenAI's
user base.
232
00:23:14,305 --> 00:23:16,876
ah, I will say that
233
00:23:16,876 --> 00:23:24,010
the sheer number of users they already have and the sophistication of o3, for example, and
just kind of the head start they've maintained.
234
00:23:24,110 --> 00:23:36,918
I don't think this is catastrophic for OpenAI, but if anything, I think it's more
headwinds for the industry as a whole that, you know, kind of validate, rightly or
235
00:23:36,918 --> 00:23:43,379
wrongly, concerns about whether these are viable tools for the long term for
professionals.
236
00:23:43,379 --> 00:23:51,678
Yeah, and it also brings up another interesting dynamic, which is, is it going to increase
investments?
237
00:23:51,678 --> 00:23:55,322
I think these things benefit Meta, right?
238
00:23:55,322 --> 00:24:02,430
And the open source scenarios that you can self-host and essentially control the
environment in which you engage.
239
00:24:02,430 --> 00:24:04,620
um I don't know.
240
00:24:04,620 --> 00:24:05,983
Do you agree?
241
00:24:06,254 --> 00:24:07,834
I would definitely agree.
242
00:24:07,834 --> 00:24:13,574
I think that the future will probably look a lot more open source.
243
00:24:13,574 --> 00:24:21,254
We know that, fortunately, Sam Altman has tipped his hand and said that OpenAI wants to go
the open source route.
244
00:24:21,374 --> 00:24:34,994
We know that Meta is stealing more talent than the Lakers do in the off season in terms of
the number of AI experts they've poached from OpenAI as well as from ScaleAI.
245
00:24:35,158 --> 00:24:45,901
And so I think if you just look at how this race is going to develop, more and more large
enterprises are going to want to exercise more and more control over their models.
246
00:24:45,941 --> 00:24:49,142
And open sourcing just makes that far more feasible.
247
00:24:49,142 --> 00:25:01,285
There's also been an evolution, I'd say, in the national security conversation around
OpenAI, or excuse me, an evolution in the national security conversation around open
248
00:25:01,285 --> 00:25:02,046
source.
249
00:25:02,046 --> 00:25:03,586
I think for a long time,
250
00:25:03,586 --> 00:25:14,674
there was a concern that open-sourcing models would lead to bad actors getting their hands
on those models sooner rather than later and using them for nefarious purposes.
251
00:25:14,954 --> 00:25:29,485
Following DeepSeek, which I guess is almost uh seven months old now, that DeepSeek moment
made a lot of people realize that the US moat with respect to peers and adversaries like
252
00:25:29,485 --> 00:25:31,686
China isn't as
253
00:25:31,754 --> 00:25:34,686
extensive, isn't as wide, as previously imagined.
254
00:25:34,686 --> 00:25:48,524
And so if we can get more sophisticated AI tools like open source models in more hands, we
can collectively be a more savvy AI nation, a more uh thoughtful AI nation with respect to
255
00:25:48,524 --> 00:25:58,970
being able to test these models and probe them uh and use the whole of America's AI
expertise to make sure we are developing the most advanced and most sophisticated AI
256
00:25:58,970 --> 00:25:59,647
models.
257
00:25:59,647 --> 00:26:14,792
Yeah, you know, shifting gears a little bit, taking everything you just said and then
looking at the legal industry, specifically Big Law, you know, I'm of the opinion that the
258
00:26:14,792 --> 00:26:23,901
future for law firms is not a scenario where they buy ready-made, off-the-shelf tools
259
00:26:23,955 --> 00:26:30,808
like Harvey and Legora, which are great tools, and I'm not saying you shouldn't leverage
those tools, but they don't create differentiation.
tools, but they don't create differentiation.
260
00:26:30,849 --> 00:26:31,249
Right.
261
00:26:31,249 --> 00:26:37,292
If your competitor down the street can buy the same tools as you, by definition, there's
no differentiation there.
262
00:26:37,292 --> 00:26:42,595
Now, how you build workflows and how you use those tools can differentiate.
263
00:26:42,595 --> 00:26:53,951
But, you know, I'm of the belief that longer term, law firms are going to have to invest
in strategies that leverage their data.
264
00:26:54,213 --> 00:27:00,579
and create solutions within their four walls using things like Azure OpenAI, Azure AI
Search.
265
00:27:00,579 --> 00:27:04,632
We're actually putting our chips on that part of the table ourselves here at InfoDash.
266
00:27:04,632 --> 00:27:12,608
We're an intranet and extranet company, but we have something called the integration hub
that we deploy that makes our product work.
267
00:27:12,709 --> 00:27:22,771
And it lives in the client's Azure tenant and it has tentacles into all the back office
systems and respects security trimming and ethical wall boundaries.
268
00:27:22,771 --> 00:27:27,865
And then that enables firms to tap in using Azure AI Search.
269
00:27:27,865 --> 00:27:34,100
If they want to crawl and index their practice management solution, we've enabled them to
do that.
270
00:27:34,100 --> 00:27:42,377
If they want to, we've got a labor and employment firm who has all of this amazing labor
and employment data that they compile for all 50 states.
271
00:27:42,377 --> 00:27:46,880
And they also have all of their clients' employment agreements and employee handbooks.
272
00:27:46,880 --> 00:27:49,482
And we're like, hey, wait a minute, you've got the ingredients here.
273
00:27:49,683 --> 00:27:52,755
Use our integration hub, tap in there, build on
274
00:27:52,755 --> 00:28:04,890
Azure AI Search and Azure OpenAI, go flag all the exceptions, and instead of your
clients having to log in and peruse the new regulatory updates in Wisconsin, you
275
00:28:04,890 --> 00:28:08,872
proactively go to them and say, hey, look, you've got exceptions and we can help you
remediate them.
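A hedged sketch of the pattern Ted describes here: retrieve indexed firm documents from Azure AI Search, then ask an Azure OpenAI deployment to flag exceptions. The endpoints, index name, deployment name, and field names are hypothetical placeholders, not InfoDash's actual configuration:

```python
# Hypothetical sketch: RAG-style exception flagging over an indexed document store.
# Assumes azure-search-documents and openai (>= 1.0) are installed.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# Retrieve the documents most relevant to a new state regulation.
search = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="employment-agreements",  # hypothetical index name
    credential=AzureKeyCredential("<search-api-key>"),
)
hits = search.search(search_text="Wisconsin non-compete notice requirements", top=5)
context = "\n\n".join(doc["content"] for doc in hits)  # assumes a 'content' field

# Ask an Azure OpenAI deployment to flag conflicting clauses.
llm = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<openai-api-key>",
    api_version="2024-02-01",
)
answer = llm.chat.completions.create(
    model="<your-deployment-name>",  # the Azure OpenAI deployment to call
    messages=[
        {"role": "system", "content": "You flag compliance exceptions in client documents."},
        {
            "role": "user",
            "content": f"Given these excerpts:\n{context}\n\n"
                       "List any clauses that appear to conflict with the new Wisconsin rules.",
        },
    ],
)
print(answer.choices[0].message.content)
```

In practice, as Ted notes, each query would also carry the requesting user's identity so that security trimming and ethical wall boundaries are enforced before any document text reaches the model.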
276
00:28:08,872 --> 00:28:10,453
I see that as the future.
277
00:28:10,453 --> 00:28:10,873
I don't know.
278
00:28:10,873 --> 00:28:12,078
How do you view that?
279
00:28:12,078 --> 00:28:14,139
You know, am...
280
00:28:15,000 --> 00:28:22,904
The thing I could scream from the rooftops or mountaintops, you pick, would really be
doubling down on this data question.
281
00:28:22,904 --> 00:28:32,390
Because I think that folks are realizing that access to compute, even though it's
difficult, that's going to be something that's available.
282
00:28:32,390 --> 00:28:40,374
Access to the best algorithms, yes, we're going to see some people differentiate
themselves with respect to the efficiency of those algorithms.
283
00:28:40,586 --> 00:28:52,222
Access to talent is obviously a huge one as well. But when it comes to identifying narrow
AI use cases, the AI use cases that are going to have real, practical, meaningful impact on
284
00:28:52,222 --> 00:28:57,095
businesses, on society, on government, it all comes back to quality data.
285
00:28:57,195 --> 00:29:07,060
And you and I had a conversation earlier about what's an analogy perhaps of some of the
misuse we're seeing in AI right now.
286
00:29:07,060 --> 00:29:10,348
And for me, it kind of goes back to this notion of
287
00:29:10,348 --> 00:29:11,878
a Model T car.
288
00:29:11,938 --> 00:29:21,361
You have this tool and if you're driving on streets in 1907 and you've got a Model T, are
you going to drive across the country?
289
00:29:21,361 --> 00:29:22,782
No, you just won't make it.
290
00:29:22,782 --> 00:29:24,922
There's not the proper infrastructure there.
291
00:29:24,922 --> 00:29:26,343
There's no gas stations.
292
00:29:26,343 --> 00:29:28,103
There's no highway system.
293
00:29:28,103 --> 00:29:32,364
Are you even going to be able to drive it across town reliably?
294
00:29:32,364 --> 00:29:33,535
Maybe not, right?
295
00:29:33,535 --> 00:29:35,585
It depends on the context.
296
00:29:35,685 --> 00:29:38,486
And when you have people right now taking
297
00:29:38,622 --> 00:29:44,864
you know, as you mentioned, just kind of generative AI tools readily available to the rest
of the competition,
298
00:29:44,864 --> 00:29:52,968
and you try to use that for your most sophisticated use case, to tailor your best brief,
to craft a really bespoke contract,
299
00:29:52,968 --> 00:29:58,710
it's going to fail unless you're training it on the best, high-quality data.
300
00:29:58,710 --> 00:30:00,160
And to your point,
301
00:30:00,426 --> 00:30:03,138
Large law firms, they're already leaning into this.
302
00:30:03,138 --> 00:30:15,597
They're working with the OpenAIs of the world to say, help us craft a proprietary version
of ChatGPT that's been trained specifically on our vast troves of data.
303
00:30:15,597 --> 00:30:27,805
If you think about some of these incredibly large law firms that have an international
presence, that have been in operation for decades, that have been creating contracts,
304
00:30:27,805 --> 00:30:29,216
thousands of them.
305
00:30:29,216 --> 00:30:31,928
a year, if not millions of them a year.
306
00:30:31,988 --> 00:30:42,776
The sheer quantity of that data is going to be a huge asset for them to be able to create
AI tools that uh give them a meaningful advantage over the competition.
307
00:30:42,776 --> 00:30:53,893
And that's arguably my biggest concern: that we're going to see the largest firms
continue to build a larger and larger advantage over those small mom-and-pop shops, for
308
00:30:53,893 --> 00:30:57,944
example, over those boutique law firms, who don't have
309
00:30:57,944 --> 00:31:09,941
thousands of contracts or millions of contracts to train a model on. And so I'm a little
bit concerned about what the competitive landscape of the legal ecosystem looks
310
00:31:09,941 --> 00:31:11,521
like a few years from now.
311
00:31:11,617 --> 00:31:13,317
Yeah, I mean, that's a great point.
312
00:31:13,317 --> 00:31:21,177
So I think that, you know, there are a lot of dynamics in the legal marketplace that are
somewhat unique.
313
00:31:21,177 --> 00:31:23,437
First of all, it's extremely fragmented.
314
00:31:23,657 --> 00:31:28,377
The top five firms control less than 7% of the market.
315
00:31:28,377 --> 00:31:33,017
It's about $350 billion-ish in legal spend.
316
00:31:33,185 --> 00:31:38,168
And the entire AmLaw 100 controls less than half.
317
00:31:38,168 --> 00:31:47,674
If you look at other industries, I use the Big Four because that's kind of the closest
analog: accounting and audit.
318
00:31:47,674 --> 00:31:59,731
And the Big Four control 97% of the audit work of all US public companies. Completely
different concentration makeup there.
319
00:31:59,731 --> 00:32:09,526
So I look at, okay, the AmLaw 100 today: extremely fragmented, a very bespoke culture that
has not really embraced innovation historically.
320
00:32:09,526 --> 00:32:20,011
They're laggards; they're in a partnership model with cash-basis accounting that
prioritizes profit-taking instead of capital expenditure and R&D.
321
00:32:20,011 --> 00:32:25,249
And I struggle to see, on both ends, really all ends of the spectrum,
322
00:32:25,249 --> 00:32:28,889
how do we get to 2.0, Big Law and small law?
323
00:32:28,889 --> 00:32:29,189
I don't know.
324
00:32:29,189 --> 00:32:30,649
Do you have any thoughts on that?
325
00:32:30,680 --> 00:32:42,653
You know, I think the first place it has to start with is law schools, which is why I'm so
thrilled to be exactly where I am because we need to get more law school students who are
326
00:32:42,653 --> 00:32:45,834
increasingly thinking in an entrepreneurial lens, right?
327
00:32:45,834 --> 00:32:50,036
They've grown up in an era of move fast and break things.
328
00:32:50,036 --> 00:32:59,258
And increasingly, when I have new students come here to UT, they will have gone to
undergrad institutions that have enterprise level.
329
00:32:59,402 --> 00:33:00,563
AI accounts, right?
330
00:33:00,563 --> 00:33:10,188
They're going to have four years of experience, hopefully meaningful experience and not
just generating that essay at, you know, 11:30 PM before the deadline, but some meaningful
331
00:33:10,188 --> 00:33:12,009
experience using these tools.
332
00:33:12,009 --> 00:33:20,134
And then they're going to come into schools like UT, and I'm going to be able to connect
them with companies like Rev here in Austin.
333
00:33:20,134 --> 00:33:27,406
Rev is developing transcription tools that can be used in depositions, for example, to
be able to
334
00:33:27,406 --> 00:33:31,146
pick up on new insights that would perhaps otherwise go unnoticed.
335
00:33:31,146 --> 00:33:43,406
I'm gonna be able to connect them with folks like Novo here in Austin that is changing the
workflow of personal injury attorneys and compiling medical documentation.
336
00:33:43,406 --> 00:33:54,106
And so if I can expose them to those tools as a 1L or a 2L or as a 3L, and they're the
sorts of folks who are thinking about that next generation of law, then they can be on the
337
00:33:54,106 --> 00:33:55,948
vanguard of shaping
338
00:33:55,948 --> 00:34:06,667
the law firms of the future, because I really do believe that as much of an advantage as
the largest law firms may have right now, and as great of an
339
00:34:06,667 --> 00:34:21,239
advantage as they may have with respect to data, as we discussed, if you're a client and
someone comes to you and says, look, I've gotten rid of all of the waste, all of the
340
00:34:21,239 --> 00:34:25,056
rainmaker funds that you're going to pay if you're going to go with the biggest firm.
341
00:34:25,056 --> 00:34:33,762
and I am this agile, client-forward, AI-first law firm, I think I know who I want to go
with, right?
342
00:34:33,762 --> 00:34:34,913
And that's the bet.
343
00:34:34,913 --> 00:34:47,671
That's the thing we have to lean into as legal educators and as a whole legal ecosystem,
because I'm most excited by the potential for AI to really lower access to justice
344
00:34:47,671 --> 00:34:54,668
barriers, um the kind of thing that the legal community loves to hide and not
345
00:34:54,668 --> 00:34:58,560
discuss is that we have a huge access to justice gap.
346
00:34:58,560 --> 00:35:12,969
If you look at the recent California State Bar Report from 2024 analyzing how likely it
was that an individual who has a civil legal issue actually receives legal counsel, it's
347
00:35:12,969 --> 00:35:16,011
staggeringly low and it's problematically low.
348
00:35:16,011 --> 00:35:20,233
And so I think that the legal community has an obligation.
349
00:35:20,233 --> 00:35:21,390
If you look at our
350
00:35:21,390 --> 00:35:33,240
rules of professional conduct, whether it's the ABA Model Rules or a state's rules of
professional conduct, every lawyer has an obligation to the quality of the justice system.
351
00:35:33,520 --> 00:35:44,870
And quality has to mean that we provide everyone who has a right to defend or a right to
assert with meaningful guidance. And right now we just don't have enough lawyers, but we
352
00:35:44,870 --> 00:35:49,676
can meet that need, or we can vastly expand our ability to meet that need, with AI.
353
00:35:49,676 --> 00:36:01,766
So that's where I get excited, and that's where I really say, if we have a next generation
of lawyers leaning into AI, they might manage to disrupt some of these really stodgy,
354
00:36:01,766 --> 00:36:04,585
inertial dynamics of the legal marketplace.
355
00:36:04,585 --> 00:36:15,138
Yeah, and you know, that would eliminate a key lever if we really do lower the barriers to
access to justice.
356
00:36:15,138 --> 00:36:22,340
A very common tactic is, you know, financial means, right?
357
00:36:22,340 --> 00:36:31,133
Like I know if I've got more dollars to spend on a legal proceeding than you do, that is
leverage for me.
358
00:36:31,133 --> 00:36:31,583
Right?
359
00:36:31,583 --> 00:36:32,393
So
360
00:36:33,178 --> 00:36:42,229
Having that dynamic diminished, I think, really changes the game and maybe produces better
outcomes.
361
00:36:42,382 --> 00:36:55,882
Yeah, and I think this should be a moment where the legal industry and the legal academy
looks at some of the systems and assumptions we've been making for almost 100 years and
362
00:36:55,882 --> 00:36:57,382
takes those head on.
363
00:36:57,422 --> 00:37:02,242
The Federal Rules of Civil Procedure were written in 1938.
364
00:37:03,182 --> 00:37:03,942
1938?
365
00:37:03,942 --> 00:37:08,554
That's almost a century ago, and we're still adhering to
366
00:37:08,554 --> 00:37:19,238
arbitrary deadlines that someone thought would be good, where it's still unclear exactly
what you need to include in your complaint to survive a motion to dismiss.
367
00:37:19,278 --> 00:37:31,353
These are ludicrous, antiquated ways of thinking about how people should be able to assert
their rights in a country that really prides itself on the rule of law and everyone being
368
00:37:31,353 --> 00:37:32,844
equal under the law.
369
00:37:32,844 --> 00:37:35,675
That's just not the case under these outdated systems.
370
00:37:35,675 --> 00:37:37,858
And so I'm optimistic that
371
00:37:37,858 --> 00:37:52,186
This is a time for creative thinking uh and for folks from across different disciplines to
come to lawyers and say, hey, let us help you revise uh these norms and these rules so
372
00:37:52,186 --> 00:37:54,249
that you can better fulfill your purpose.
373
00:37:54,249 --> 00:38:04,499
Yeah, you know, and along those lines, it's obviously going to change the way that law
firms price, right?
374
00:38:04,499 --> 00:38:05,801
Their pricing strategies.
375
00:38:05,801 --> 00:38:08,853
And you're seeing some really interesting challenger firms in the UK.
376
00:38:08,853 --> 00:38:18,813
You have Garfield Law, which is an AI-first, or maybe AI-only, kind of small-claims firm.
377
00:38:19,365 --> 00:38:25,208
I don't know if they're a tech company or a law firm, you know, the rules are different
over there with the Legal Services Act.
378
00:38:25,208 --> 00:38:29,051
And now you have Crosby AI here in the US.
379
00:38:29,051 --> 00:38:32,092
It's a really interesting time to be a challenger firm.
380
00:38:32,092 --> 00:38:40,817
But you know, whenever I hear this, and I talk about it a lot, in fact, I just attended a
conference, an Inside Practice event in New York, on pricing.
381
00:38:40,817 --> 00:38:47,881
It's actually financial management and innovation, but we talked a lot about pricing and
um
382
00:38:47,881 --> 00:38:57,666
You know, a lot of people like to throw out concepts that sound good, like outcome-based
pricing and value-based pricing.
383
00:38:57,666 --> 00:39:03,428
You know, I think, yes, that makes sense to me, but there are challenges with that.
384
00:39:03,428 --> 00:39:04,609
So here in St.
385
00:39:04,609 --> 00:39:12,592
Louis, where I live, all the plumbing companies, I don't know if they've banded together,
but they've decided that they are no longer doing time and materials work.
386
00:39:12,592 --> 00:39:14,523
They only do flat fee work.
387
00:39:14,527 --> 00:39:18,821
and they will not give you a breakdown of labor versus materials.
388
00:39:18,841 --> 00:39:29,331
And as a consumer, that creates an opaque veil between me and my ability to see if I'm
getting a fair deal.
389
00:39:29,331 --> 00:39:38,300
But I had some work done in my basement, and they came in, and I had a leak in a sewer
line.
390
00:39:39,073 --> 00:39:48,300
You know, I sat back and thought about it like, okay, what is it worth to me to not have
my sewer flood my basement? Quite a lot. But that's not... I'm not going to base
391
00:39:48,300 --> 00:39:53,944
my willingness to pay a price based on that value or that outcome.
392
00:39:53,944 --> 00:39:56,926
It still comes back to supply and demand, right?
393
00:39:56,926 --> 00:40:05,392
In other words, if I can find another plumber to deliver the same outcome for less money,
then I'm going that direction.
394
00:40:05,392 --> 00:40:07,041
You can't say, well, it's worth
395
00:40:07,041 --> 00:40:08,621
So my basement did flood.
396
00:40:08,621 --> 00:40:10,741
It cost me about 45 grand.
397
00:40:10,901 --> 00:40:21,741
Um, I had some insurance, so that offset some of it. So I know the exact cost of a flood
down there. But, you know, they can't say, well, it's going to be
398
00:40:21,741 --> 00:40:22,721
15%.
399
00:40:22,721 --> 00:40:24,021
That's a fair price.
400
00:40:24,021 --> 00:40:27,921
Like, in the legal world, I look at it like that: yes, okay.
401
00:40:27,981 --> 00:40:34,667
The value that you're delivering and the outcome that you may be preventing or enabling
does have a dollar figure.
402
00:40:34,667 --> 00:40:40,728
But you being able to charge a portion of that is also influenced by supply and demand.
403
00:40:40,728 --> 00:40:41,969
So I don't know.
404
00:40:42,557 --> 00:40:45,840
How do you see that pricing situation?
405
00:40:45,840 --> 00:40:56,733
You know, the pricing one, I'll say, leaning into my earlier comment, it's not my area of
expertise in terms of thinking through how this will exactly change kind of
406
00:40:56,733 --> 00:40:58,574
those firm pricing tactics.
407
00:40:58,574 --> 00:41:10,557
But I will agree with you that I think it is so essential that we use this moment to get
back to first principles about what is it that we're actually trying to achieve with our
408
00:41:10,557 --> 00:41:11,877
justice system.
409
00:41:11,937 --> 00:41:15,398
And if it's just getting money out of the litigants.
410
00:41:15,788 --> 00:41:17,319
That's a problem, right?
411
00:41:17,319 --> 00:41:23,983
And I think we need to really use this moment to explore ideas like regulatory sandboxes.
412
00:41:23,983 --> 00:41:36,669
So talking earlier about my encouragement and advocacy for sunset clauses and for
retrospective review, that should be the case in the legal industry as well, in how we
413
00:41:36,669 --> 00:41:37,970
govern ourselves.
414
00:41:37,970 --> 00:41:45,272
So I want to see more states uh actually have some degree of experimentation with how is
this new
415
00:41:45,272 --> 00:41:54,787
tool being used, how is this new pricing system being used, who's implicated, who's not
litigating their claims, who's litigating too many claims.
416
00:41:54,787 --> 00:42:02,681
All of this should be tracked, monitored, analyzed, shared, and used as the basis to
inform our rules going forward.
417
00:42:02,681 --> 00:42:07,194
But we're not a very empirically savvy profession, right?
418
00:42:07,194 --> 00:42:15,148
The fact that tech justice and tech law is something that seemingly appeared a decade or
so ago, or two decades ago,
419
00:42:15,158 --> 00:42:19,800
is pretty indicative of a profession that's been around arguably since the beginning of
time.
420
00:42:20,080 --> 00:42:26,083
So, you know, maybe we could improve the extent to which we're trying to really monitor
how we're doing.
421
00:42:26,083 --> 00:42:31,425
And I hope there is some experimentation here because the stakes are so high to your
point, Ted.
422
00:42:31,425 --> 00:42:42,750
And something that I think will also happen, that we should keep our eye on, is: how is
the private sector changing the way it adjudicates its
423
00:42:42,750 --> 00:42:43,980
own claims?
424
00:42:44,034 --> 00:42:54,060
So how are we going to see businesses, for example, start to negotiate with one another
rather than going to the typical public justice system?
425
00:42:54,060 --> 00:43:02,184
They're going to start sending over disputes and claims to AI judges and to AI
adjudication systems.
426
00:43:02,184 --> 00:43:02,894
Why?
427
00:43:03,205 --> 00:43:12,680
Well, rather than waiting for months or years for that dispute to be resolved, they're
just going to outsource it to an agreed upon AI system.
428
00:43:13,000 --> 00:43:22,786
And we should actually pay a lot of attention to how those systems are working and whether
in certain contexts they may be appropriate to use to resolve some public disputes as
429
00:43:22,786 --> 00:43:23,455
well.
430
00:43:23,455 --> 00:43:25,086
Yeah, that makes a lot of sense.
431
00:43:25,086 --> 00:43:27,056
We only have a couple of minutes left, but I want...
432
00:43:27,056 --> 00:43:32,228
I want you to touch on a topic that you wrote about that I find really interesting.
433
00:43:32,228 --> 00:43:35,889
And that's around like knowledge diffusion and AI literacy.
434
00:43:35,889 --> 00:43:41,731
And I know we could probably spend the whole episode just talking about that, but it's
such an interesting topic.
435
00:43:41,731 --> 00:43:49,833
Like, can you give us a Reader's Digest version of what that means and how it impacts AI
literacy?
436
00:43:50,188 --> 00:43:52,829
Yeah, so let's imagine a hypothetical.
437
00:43:52,829 --> 00:43:57,141
I'm a law professor after all, so I have to throw out a hypo every now and again.
438
00:43:57,141 --> 00:44:00,442
Let's say tomorrow we get AGI.
439
00:44:00,442 --> 00:44:14,018
OpenAI says, we've announced the most sophisticated AI tool capable of detecting cancer at
100% accuracy, capable of tutoring everyone according to their learning style and
440
00:44:14,018 --> 00:44:15,128
learning abilities.
441
00:44:15,128 --> 00:44:16,909
All of that's available tomorrow.
442
00:44:17,509 --> 00:44:19,870
I don't think we'd actually make a ton of use of it.
443
00:44:20,236 --> 00:44:20,606
Right?
444
00:44:20,606 --> 00:44:27,471
If it came about tomorrow, the American Medical Association would want to kick
the tires of that AI.
445
00:44:27,471 --> 00:44:34,255
We'd have parent-teacher associations that would want to thoroughly vet any implementation
of that AI.
446
00:44:34,255 --> 00:44:37,698
School districts, state bars, as we've talked about.
447
00:44:37,698 --> 00:44:38,778
You name the profession.
448
00:44:38,778 --> 00:44:42,400
You name all of these different barriers and frictions.
449
00:44:42,561 --> 00:44:45,062
In many cases, I think those are appropriate.
450
00:44:45,270 --> 00:44:56,475
We should have a degree of skepticism, making sure that before we introduce these AI
tools into really sensitive, really important use cases, we're vetting
451
00:44:56,475 --> 00:44:56,695
them.
452
00:44:56,695 --> 00:44:59,956
Let's make sure we understand what we're about to proceed with.
453
00:45:00,537 --> 00:45:13,802
How we do that vetting and whether that vetting is actually successful and rational and
not based off of uh skepticism or fear or concerns about, uh you know,
454
00:45:14,050 --> 00:45:18,292
black swan events where the whole of society gets turned into paper clips.
455
00:45:18,332 --> 00:45:21,073
That's contingent upon AI literacy.
456
00:45:21,293 --> 00:45:25,615
Do folks have enough of an understanding of how the technology works?
457
00:45:25,615 --> 00:45:34,018
Do they have enough experience with the technology to know its limitations and its best
use cases?
458
00:45:34,019 --> 00:45:39,741
Do they have a willingness to experiment with that technology in really important cases?
459
00:45:39,941 --> 00:45:42,456
If the answer is no to those questions,
460
00:45:42,456 --> 00:45:47,488
then it doesn't matter if America is the first to achieve AGI, right?
461
00:45:47,488 --> 00:46:00,012
That's my big concern about the lack of emphasis we've placed on knowledge diffusion,
because right now we know that China, for example, is investing heavily in increasing
462
00:46:00,012 --> 00:46:03,633
the number of PhDs with expertise in AI.
463
00:46:03,633 --> 00:46:07,294
We know that other countries are actively trying to solicit
464
00:46:07,338 --> 00:46:15,683
as many AI experts as possible to move to their country and to lend their expertise to
their governments, to their businesses, to their schools.
465
00:46:15,683 --> 00:46:23,126
Estonia has a mandate for all of their public school students to be exposed to AI.
466
00:46:23,787 --> 00:46:26,988
Where do we see that sort of vision here in the States?
467
00:46:27,089 --> 00:46:33,932
We've yet to have anything meaningful, for example, what I've called for: an AI education corps.
468
00:46:33,932 --> 00:46:37,314
Why aren't we using our community colleges, for instance,
469
00:46:37,314 --> 00:46:45,639
to help train and deploy folks who can then go to small businesses in their community and
say, here's an AI tool that would really help you out.
470
00:46:45,639 --> 00:46:48,541
And let me help you integrate that into your small business.
471
00:46:48,541 --> 00:46:58,467
We can have public libraries serve as hubs for AI companies to come do demonstrations for
people to learn about the latest and greatest AI.
472
00:46:58,467 --> 00:47:00,438
These steps are really important.
473
00:47:00,438 --> 00:47:03,540
And for listeners who are thinking, OK, well,
474
00:47:03,544 --> 00:47:11,916
you know, this all sounds nice, and yeah, it would be excellent if we could diffuse all
this and increase the general level of AI literacy.
475
00:47:12,136 --> 00:47:18,058
I encourage those folks who are maybe a little skeptical to go read the work of Jeffrey
Ding.
476
00:47:18,058 --> 00:47:24,019
Jeffrey Ding is a political scientist, and he's studied this diffusion question closely.
477
00:47:24,020 --> 00:47:31,131
And in the context of the Cold War, it was often the USSR who was the first to innovate,
right?
478
00:47:31,131 --> 00:47:33,132
They were the first to get to Sputnik.
479
00:47:33,132 --> 00:47:39,725
For example, they made a lot of early advances on weapons systems where we were lagging
behind.
480
00:47:39,826 --> 00:47:41,247
Why did we win?
481
00:47:41,247 --> 00:47:53,313
Well, we had more engineers, we had more scientists, we had more general expertise so that
we could turn those innovations into actual progress, into actual tangible goods and
482
00:47:53,313 --> 00:47:56,015
services in a much faster fashion.
483
00:47:56,015 --> 00:48:01,986
So knowledge diffusion really is the key to turning innovation into progress.
484
00:48:01,986 --> 00:48:04,294
And we need to place a greater emphasis on that.
485
00:48:04,839 --> 00:48:06,129
I couldn't agree more.
486
00:48:06,129 --> 00:48:13,052
I love the community college idea and the public library idea that you posed.
487
00:48:13,052 --> 00:48:18,034
And I would say, let's start with some knowledge diffusion among the legislators.
488
00:48:18,034 --> 00:48:22,375
The ones making the rules, you know?
489
00:48:22,375 --> 00:48:30,121
Not only them, but I'd also not be a good academic if I didn't err on the side of being a
little self-promotional.
490
00:48:30,121 --> 00:48:38,747
I wrote a whole law review article called, what was it, like, "An F in Judicial Education."
491
00:48:38,747 --> 00:48:47,872
And it's all about how if you go talk to state judges, they're not getting recurring
meaningful education on the latest technology.
492
00:48:47,872 --> 00:48:58,329
If you go talk to the justices on various state supreme courts, it's not like they
get a briefing from OpenAI about how AI works.
493
00:48:58,329 --> 00:49:06,344
Like the rest of us, they're just trying to figure it out by doing some Googling, or
Perplexity searches I guess now, or hoping that their clerks have learned about
494
00:49:06,344 --> 00:49:07,155
AI.
495
00:49:07,155 --> 00:49:12,658
That's not a really reliable strategy for a high-quality justice system.
496
00:49:12,831 --> 00:49:13,321
No doubt.
497
00:49:13,321 --> 00:49:20,204
I had a judge on, God, it must've been six months ago, Judge Scott Schlegel, in
Louisiana.
498
00:49:20,204 --> 00:49:31,129
And he gave a really good assessment of just the state of the judicial system,
technology-wide, not just AI specifically, and their lack of
499
00:49:31,129 --> 00:49:32,290
readiness around it.
500
00:49:32,290 --> 00:49:39,583
He works a lot with domestic violence cases, and talked about the ability to use deepfake
technology
501
00:49:39,687 --> 00:49:43,451
on both sides of the equation and just the risks around that.
502
00:49:43,451 --> 00:49:46,836
And it was, like, a good episode.
503
00:49:46,836 --> 00:49:47,927
It's wild.
504
00:49:47,927 --> 00:49:59,458
And I think that the more we continue to see schools like UT, schools like Vanderbilt,
lean into AI and try to make sure the next generation is AI literate and achieving that
505
00:49:59,458 --> 00:50:04,293
sort of knowledge diffusion among key professionals, the better we can serve everyone.
506
00:50:04,293 --> 00:50:07,125
I mean, the same goes for doctors as well, right?
507
00:50:07,125 --> 00:50:10,348
Do you want a doctor who doesn't trust
508
00:50:10,816 --> 00:50:22,442
radiological AI tools, despite them having 99 or 95 percent accuracy, or far greater
accuracy than the human equivalent? I'd rather go to the AI doctor, right?
509
00:50:22,442 --> 00:50:25,331
So we need this across so many professions.
510
00:50:25,331 --> 00:50:26,884
Yeah, no, that's a great point.
511
00:50:26,884 --> 00:50:29,217
Well, this has been a great conversation.
512
00:50:29,217 --> 00:50:32,412
How about you tell our listeners how to find out more.
513
00:50:32,412 --> 00:50:34,094
It sounds like you've got a podcast.
514
00:50:34,094 --> 00:50:34,975
What's the name of it?
515
00:50:34,975 --> 00:50:36,277
How do they find your writing?
516
00:50:36,277 --> 00:50:38,338
And how do they connect with you?
517
00:50:38,338 --> 00:50:39,038
Yeah, yeah.
518
00:50:39,038 --> 00:50:50,060
So if you want to listen to Scaling Laws, if you're interested in AI policy, AI
governance, check out Scaling Laws. It should be available on all podcast sites that you go
519
00:50:50,060 --> 00:50:50,943
to.
520
00:50:50,943 --> 00:51:01,648
If you want my own musings on AI, I write on Substack at Appleseed AI, like Johnny
Appleseed, trying to spread the word, trying to diffuse some AI knowledge.
521
00:51:01,648 --> 00:51:06,549
And then you can always find me on X and Bluesky at Kevin T.
522
00:51:06,549 --> 00:51:07,570
Frazier.
523
00:51:08,226 --> 00:51:14,633
Yeah, I really appreciate the opportunity to talk with you, Ted, and hope we can do this
again, because this was a hoot and a half.
524
00:51:14,633 --> 00:51:20,843
Yeah, I don't think we got to half of the agenda topics that we had planned,
but it was a great discussion nonetheless.
525
00:51:20,843 --> 00:51:26,992
So, listen, have a great holiday weekend, and I look forward to the next conversation.
526
00:51:27,042 --> 00:51:29,198
Thank you, and yeah, hope to see you in St.
527
00:51:29,198 --> 00:51:29,909
Louis sometime.
528
00:51:29,909 --> 00:51:31,311
That sounds great.
529
00:51:31,894 --> 00:51:32,415
All right.
530
00:51:32,415 --> 00:51:33,556
Thanks, Kevin.
531
00:51:33,986 --> 00:51:34,970
Thank you.
00:00:05,128
Kevin Frazier, how are you today?
2
00:00:05,506 --> 00:00:06,216
Doing well, Ted.
3
00:00:06,216 --> 00:00:07,423
Thanks for having me on.
4
00:00:07,423 --> 00:00:08,794
Yeah, I'm excited.
5
00:00:08,794 --> 00:00:21,101
is, um you and I had a conversation, couple of, actually it was this week, and talked
about some of the new AI regulation that was pending and we're gonna discuss the outcome.
6
00:00:21,202 --> 00:00:24,724
And today is July 3rd.
7
00:00:24,724 --> 00:00:27,325
So, and I think this episode is gonna get released next week.
8
00:00:27,325 --> 00:00:29,266
So this will be very timely information.
9
00:00:29,266 --> 00:00:32,819
um But before we get into that, let's get you introduced.
10
00:00:32,819 --> 00:00:33,779
You're a...
11
00:00:33,875 --> 00:00:37,455
AI researcher and um an academic.
12
00:00:37,455 --> 00:00:41,435
Why don't you tell us a little bit about who you are, what you do, and where you do it.
13
00:00:41,474 --> 00:00:50,778
Yeah, so I'm based here in Austin, land of tacos, bats, and now the AI Innovation and Law
program here at the University of Texas School of Law.
14
00:00:50,778 --> 00:00:57,501
So I'm the school's inaugural AI Innovation and Law fellow, which is super exciting.
15
00:00:57,501 --> 00:01:09,186
So I get to help make sure that all of the students here at UT are AI literate and ready
to go into the legal practice, knowing the pros and cons of AI and how best to help their
16
00:01:09,186 --> 00:01:10,018
clients.
17
00:01:10,018 --> 00:01:13,770
And also to contribute to some of these important policy conversations.
18
00:01:13,770 --> 00:01:21,564
So my background is uh doing a little bit of everything in the land of emerging tech
policy.
19
00:01:21,564 --> 00:01:23,745
So I worked for Google for a little stint.
20
00:01:23,745 --> 00:01:27,027
um I've worked for the government of Oregon.
21
00:01:27,027 --> 00:01:29,649
I was a clerk on the Montana Supreme court.
22
00:01:29,649 --> 00:01:31,069
I taught law at St.
23
00:01:31,069 --> 00:01:32,830
Thomas University college of law.
24
00:01:32,830 --> 00:01:40,014
And I did some research for a group called the Institute for law and AI, but now I get to
spend my full time here at UT.
25
00:01:40,130 --> 00:01:48,002
teaching AI, writing about AI, and like you, podcasting about AI for a little podcast
called Scaling Law.
26
00:01:48,002 --> 00:01:51,106
So like you, I can't get enough of this stuff.
27
00:01:51,137 --> 00:01:51,978
Absolutely, man.
28
00:01:51,978 --> 00:01:53,039
I'm I'm jealous.
29
00:01:53,039 --> 00:02:00,007
I wish this is like a very part-time gig for me Like I still have a day job, but your day
job sounds awesome uh
30
00:02:00,007 --> 00:02:01,610
can't believe I get to do this.
31
00:02:01,610 --> 00:02:03,334
It's the best job ever.
32
00:02:03,334 --> 00:02:11,489
And hopefully you find me Ted buried here outside the law school and I will be a my
tombstone will read he did what he was excited by.
33
00:02:11,489 --> 00:02:13,329
That's good stuff.
34
00:02:13,909 --> 00:02:23,509
Well, I guess before we jump into the agenda, I'm encouraged to hear that law schools are
really moving in this direction.
35
00:02:23,589 --> 00:02:35,209
I saw a stat from the ABA that I think was in December that said around, it was just over
50 % of law schools even had a formal AI course.
36
00:02:35,989 --> 00:02:38,629
So I've had many.
37
00:02:38,933 --> 00:02:54,316
professors on the podcast and we have commiserated over really the lack of preparedness
that, you know, new law grads um have when it comes to really understanding the
38
00:02:54,316 --> 00:02:55,207
technology.
39
00:02:55,207 --> 00:03:06,135
And, you know, we also have a dynamic within the industry itself where, you know,
historically clients have subsidized new associate training, you know, through, um you
40
00:03:06,135 --> 00:03:08,497
know, the, the, the mentorship.
41
00:03:08,673 --> 00:03:15,388
program that uh Big Law has for new associate development.
42
00:03:15,388 --> 00:03:19,500
So it's really encouraging to hear that this is taking place.
43
00:03:19,906 --> 00:03:25,579
Yeah, no, I couldn't be more proud of the UT system as a whole leaning into AI.
44
00:03:25,579 --> 00:03:37,116
Actually, last year here in Austin was the so-called Year of AI, where the entire campus
was committed to addressing how are we going to adjust to this new technological age.
45
00:03:37,116 --> 00:03:47,412
here at the law school, Dean Bobby Chesney has made it clear that as much attention as the
Harvards get, the Stamfords get, the NYUs get,
46
00:03:47,466 --> 00:03:56,634
Austin's really a spot where if you want to go find a nexus of policymakers, venture
capitalists, and AI developers, you're going to find them in Austin.
47
00:03:56,634 --> 00:04:07,142
And so this is really a spot that students can come to, scholars can come to, community
members can come to, and find people who are knowledgeable about AI.
48
00:04:07,142 --> 00:04:12,967
And I think critically, something that you and I discussed earlier, curious about AI.
49
00:04:12,967 --> 00:04:16,834
One of my tired lines, my wife, if she ever listens to this,
50
00:04:16,834 --> 00:04:19,275
will say, my gosh, you said it again.
51
00:04:19,414 --> 00:04:21,657
I have never met an AI expert.
52
00:04:21,657 --> 00:04:29,223
And in fact, if I meet an AI expert, that's the surest sign that they're not because this
technology is moving too quickly.
53
00:04:29,223 --> 00:04:30,523
It's too complex.
54
00:04:30,523 --> 00:04:37,658
And anyone who thinks they have their entire head wrapped uh around this is just full of
hooey, in my opinion.
55
00:04:37,658 --> 00:04:46,402
And so it's awesome to be in a spot where everyone is committed to working in an
interdisciplinary fashion and a practical fashion, to your point.
56
00:04:46,402 --> 00:04:49,695
so that they leave the law school practice ready.
57
00:04:49,695 --> 00:04:56,830
Yeah, and I mean, to your point about, you know, no AI experts, the Frontier Labs don't
even know really how these models work.
58
00:04:56,830 --> 00:05:09,509
I think Anthropic has done uh probably the best job of all the Frontier Labs really
digging in and creating transparency around how these models really work, their inner
59
00:05:09,509 --> 00:05:12,551
workings and how they get to their output.
60
00:05:12,551 --> 00:05:18,156
But yeah, I mean, these things are still a bit of a black box, even for the people who
created them.
61
00:05:18,156 --> 00:05:18,796
Right.
62
00:05:18,796 --> 00:05:30,075
no, I've had wonderful conversations with folks like Joshua Batson at Anthropic, who was
one of the leading researchers on their mechanistic interoperability report, where they
63
00:05:30,075 --> 00:05:35,779
went and showed, for example, that their models weren't just looking at the next best
word.
64
00:05:35,779 --> 00:05:44,796
That's kind of the usual way we like to try to dumb down LLMs is to just say, oh, you
know, they're just looking at the next best word based off of this distribution of
65
00:05:44,796 --> 00:05:45,876
training data.
66
00:05:45,976 --> 00:05:55,624
But if you go read that report and they write it in accessible language and it is
engaging, it is a little lengthy, but you know, maybe throw it into notebook LOM and, you
67
00:05:55,624 --> 00:05:57,655
know, make that a little easier.
68
00:05:57,776 --> 00:06:04,661
But you see these models are actually when you ask them to write you a poem, they're
working backwards, right?
69
00:06:04,661 --> 00:06:13,288
They know what word they're going to end a sentence with and they start thinking through,
okay, how do I make sure I tee myself up to get this rhyming pattern going?
70
00:06:13,288 --> 00:06:16,000
And that level of sophistication is just
71
00:06:16,000 --> 00:06:16,944
scraping the surface.
72
00:06:16,944 --> 00:06:22,537
There's so much beneath this iceberg and it's a really exciting time to be in this space.
73
00:06:22,537 --> 00:06:34,668
Yeah, and you know, they've also been transparent around the um not so desirable human
characteristics like deception that these LLMs exhibit.
74
00:06:34,668 --> 00:06:49,473
And I think that's also a really important aspect for people to understand for users of
the system so they can have awareness around the possibilities and really have a lens um
75
00:06:49,473 --> 00:06:53,733
Yeah, a little bit of a healthy skepticism about what's being presented.
76
00:06:53,793 --> 00:06:56,033
it's, they've done a fantastic job.
77
00:06:56,033 --> 00:06:57,473
I'm a big anthropic fan.
78
00:06:57,473 --> 00:07:04,753
use, you know, it's Claude, Gemini and Chad GBT are my go-tos and I use them all for
different things.
79
00:07:04,753 --> 00:07:08,653
But, you know, I will, I probably use Claude the least.
80
00:07:08,653 --> 00:07:10,693
I'm doing a lot more with Gemini now.
81
00:07:10,693 --> 00:07:12,773
Gemini is blowing my mind.
82
00:07:12,793 --> 00:07:17,237
But I will continue to support them with my $20 a month.
83
00:07:17,237 --> 00:07:22,503
because I just love the work that they're doing and really appreciate all the transparency
they're creating.
84
00:07:22,626 --> 00:07:26,879
think their writing with Claude is just incredible.
85
00:07:26,879 --> 00:07:36,895
To be able to tell Claude, for example, what style of writing you want to go forward with
and to be able to train it to focus on your specific writing style is exciting.
86
00:07:36,895 --> 00:07:42,798
But to your point, it's also key to just have folks know what are the key limitations.
87
00:07:42,798 --> 00:07:49,402
So for example, sycophancy has become a huge concern across a lot of these models.
88
00:07:49,794 --> 00:07:55,618
Favorite example is you can go in and say, hey, write in the style of the Harvard Law
Review.
89
00:07:55,658 --> 00:08:04,124
And for folks who aren't in the uh legal scholarship world, obviously getting anything
published by the Harvard Law Review would be wildly exciting.
90
00:08:04,124 --> 00:08:09,157
You'll enter some text and you'll say, all right, give me some feedback from the
perspective of the Harvard Law Review.
91
00:08:09,157 --> 00:08:13,320
And oftentimes you'll get, my gosh, this is excellent.
92
00:08:13,320 --> 00:08:16,472
There is no way the Law Review can turn you down.
93
00:08:16,472 --> 00:08:19,614
And I think you've nailed it on the head, but.
94
00:08:19,650 --> 00:08:25,277
When you have that sophistication to be able to know, okay, it may be a little
sycophantic, I can press it though, though.
95
00:08:25,277 --> 00:08:29,442
I can nudge it to be more of a harsh critic.
96
00:08:29,442 --> 00:08:39,617
And once you have that level of literacy, these tools really do have just so much
potential to transform your professional and personal uh approach to so many tasks.
97
00:08:39,617 --> 00:08:43,537
Didn't OpenAI roll back 4.5 because of this?
98
00:08:43,640 --> 00:08:44,841
Too nice, too nice.
99
00:08:44,841 --> 00:08:48,514
was too, yeah, just giving everyone too many good vibes.
100
00:08:48,514 --> 00:09:01,335
And I think that speaks to the fact that there is always going to be some degree of a role
for a human, especially in key relationships where you have mentors, where you have close
101
00:09:01,335 --> 00:09:05,548
companions, where you have loved ones who are able to tell you the hard truth.
102
00:09:05,548 --> 00:09:07,360
That's what makes a good friend, right?
103
00:09:07,360 --> 00:09:13,094
And a good teacher and a good uh partner is they can call you out on your BS.
104
00:09:13,450 --> 00:09:19,315
AI, it's harder, it's proven a little bit more difficult to make them uh more
confrontational.
105
00:09:19,315 --> 00:09:20,815
Yeah, 100%.
106
00:09:20,815 --> 00:09:27,210
Well, when we spoke earlier in the week, there was some pending legislation that you and I
talked about that I thought was super interesting.
107
00:09:27,251 --> 00:09:39,449
And the implications are, you know, um really hard to put words around, you know, had that
piece of legislation, that part of the legislation passed.
108
00:09:39,449 --> 00:09:42,361
And that was um
109
00:09:43,086 --> 00:09:51,475
I'll let you explain it because you're much closer to it, but it was essentially a 10-year
moratorium around state-level legislation around AI.
110
00:09:51,475 --> 00:09:56,620
Tell us a little bit about what was proposed and then ultimately where it landed.
111
00:09:56,920 --> 00:10:11,041
Yeah, so as part of the one big, beautiful budget bill, we saw in the House version of
that bill a 10-year moratorium on a wide swath of state AI regulations.
112
00:10:11,182 --> 00:10:23,091
And the inclusion of that language was really out of a concern that we could see, like we
have in the privacy space, a sort of patchwork approach to a key area of law.
113
00:10:23,111 --> 00:10:26,784
And if you go do economic analysis and look at
114
00:10:26,798 --> 00:10:36,224
Who is most implicated by California having one set of privacy standards and New York
having a different set and Virginia having its own and Washington having its own?
115
00:10:36,224 --> 00:10:38,005
Who does that actually impact?
116
00:10:38,005 --> 00:10:49,271
Well, in many cases, it tends to be small and medium sized businesses because they don't
have huge compliance offices, for example, or even the businesses that are just nearing
117
00:10:49,271 --> 00:10:52,653
the threshold of being implicated by those privacy laws.
118
00:10:52,653 --> 00:10:54,126
They too have to start
119
00:10:54,126 --> 00:11:02,900
hiring outside counsel, they have to be monitoring what their employees are doing to make
sure they comply with the nuances of each of these state bills.
120
00:11:02,900 --> 00:11:10,663
And so a lot of folks are concerned that we may see a similar patchwork apply in the AI
context.
121
00:11:10,663 --> 00:11:21,068
If every state is thinking through how are we gonna regulate AI differently, how do we
define AI has even proven to be a difficult challenge among state legislators.
122
00:11:21,110 --> 00:11:29,316
And so we saw the house say, all right, we're going to move forward with a 10 year
moratorium on specific state AI regulation.
123
00:11:29,316 --> 00:11:35,340
Now it's important to note that the language in the house bill was wildly unclear.
124
00:11:35,340 --> 00:11:43,286
I'm not sure who wrote the legislation, uh but yeah, you know, they could have used some
help from the drafting office.
125
00:11:43,286 --> 00:11:49,570
It was, it was a bit uh unfortunate because that muddled language added a lot of confusion
about
126
00:11:49,570 --> 00:11:54,553
how that moratorium would work in practice, and what state laws would actually be
implicated.
127
00:11:54,553 --> 00:12:08,981
The thing that the proponents of this moratorium were aiming for was that there would be a
ban or a pause on state regulation that was specific to AI.
128
00:12:08,981 --> 00:12:17,846
And so this was really out of a concern that, again, we would have uh myriad standards,
myriad definitions applying to AI development itself.
129
00:12:17,912 --> 00:12:28,805
but it didn't want to capture some of the general consumer protection laws that we know
are so important to uh making sure everyone can, for example, buy a home without being
130
00:12:28,805 --> 00:12:38,128
discriminated against, be hired or fired without being discriminated against, prevent
businesses from using unfair or deceptive business practices.
131
00:12:38,128 --> 00:12:41,648
So that was the kind of background of the house language.
132
00:12:41,689 --> 00:12:46,930
Well, as with all bills, we saw the house language then move into the Senate.
133
00:12:47,014 --> 00:12:59,311
And the Senate saw a pretty crazy, I think that's the only word that can be used to
describe this, a pretty crazy debate occur between Senator Cruz, who was one of the main
134
00:12:59,311 --> 00:13:11,047
proponents of the moratorium, and Senator Marsha Blackburn from Tennessee, who had
concerns that the moratorium might prohibit enforcement of the Elvis Act.
135
00:13:11,047 --> 00:13:16,960
Now, the Elvis Act is one of these AI specific laws that the Tennessee legislature passed.
136
00:13:16,962 --> 00:13:27,928
with a specific goal of making sure that uh the creators, the musicians, all those folks
we associate with Nashville and Tennessee would have their name, image, and likeness
137
00:13:27,928 --> 00:13:37,273
protected as a result of perhaps training on their music uh and even producing deep fakes
of their songs and things like that.
138
00:13:37,273 --> 00:13:43,817
So there was a debate and a compromise was reached between Senator Blackburn and Senator
Cruz.
139
00:13:43,817 --> 00:13:46,918
They reduced it to a five-year moratorium.
140
00:13:46,946 --> 00:13:55,830
They made sure that the language of the moratorium was compliant with some procedural
hurdles, which is a whole nother can of worms.
141
00:13:55,830 --> 00:14:04,334
Basically, if you have a budget bill, there has to be a budgetary ramification of the
language in each provision of that budget bill.
142
00:14:04,334 --> 00:14:11,117
So now the moratorium was connected to uh broadband funds and AI deployment funds.
143
00:14:11,117 --> 00:14:14,918
And so all of sudden, we just got this really crazy
144
00:14:14,968 --> 00:14:17,681
combination of ideas and concerns.
145
00:14:17,681 --> 00:14:27,649
And ultimately the Senate decided by a vote of 99 to one to just strip that language out
of the one big beautiful bill.
146
00:14:27,649 --> 00:14:34,596
So as it stands, we continue to have Congress grappling with how best to proceed.
147
00:14:34,596 --> 00:14:42,252
Congress has really only enacted one AI specific law, the Take It Down Act, which pertains
to deep fakes.
148
00:14:42,498 --> 00:14:46,822
But besides that, we're still left asking, what is our national vision for AI?
149
00:14:46,822 --> 00:14:51,486
Where are we going to go with this huge regulatory issue?
150
00:14:51,747 --> 00:14:56,491
And in that sort of regulatory void, we now have 50 states.
151
00:14:56,491 --> 00:14:59,694
Across those states, there are hundreds of AI bills.
152
00:14:59,694 --> 00:15:05,670
Depending on who you ask, it's anywhere from 100 to 200 really specific AI bills.
153
00:15:05,670 --> 00:15:08,290
That's Steven Adler's analysis.
154
00:15:08,290 --> 00:15:18,868
Whereas if you go talk to someone like Adam Thayer at R Street, he'll tell you there are
hundreds, if not a thousand or more AI pieces of legislation pending before the states.
155
00:15:18,868 --> 00:15:25,062
And so it seems as though we may be on the precipice of a sort of AI patchwork.
156
00:15:25,249 --> 00:15:32,749
Yeah, and to your point, that sounds really difficult for businesses and commerce to
navigate.
157
00:15:32,749 --> 00:15:37,749
And I'm wondering, have we just kicked the can down the road?
158
00:15:37,749 --> 00:15:50,489
Because the path of each state making its own unique set of rules sounds completely
unsustainable from where I sit as a business owner and someone who uses the technology
159
00:15:50,489 --> 00:15:51,949
every day.
160
00:15:52,649 --> 00:15:53,789
Is that?
161
00:15:53,865 --> 00:16:05,317
You know, have we just postponed the Fed, you know, stepping in and making some rules or
is this, are we, is the status quo going to be around for a little while?
162
00:16:05,317 --> 00:16:06,252
Do we know?
163
00:16:06,252 --> 00:16:16,432
Yeah, if I had to bet and I'll preface by saying I'm not a betting man because if you
check my March Madness bracket uh each April, you'll see what a disaster it is.
164
00:16:16,633 --> 00:16:28,925
But if you look at the current political winds, I think we're going to see at least a
handful of states uh like New York with the Raise Act sponsored by Assemblymember Boris.
165
00:16:28,925 --> 00:16:30,314
uh
166
00:16:30,314 --> 00:16:42,004
If we look at Colorado, which is actively working towards implementing the Colorado AI
Act, and if we look toward California, which has already passed a bevy of AI specific
167
00:16:42,004 --> 00:16:45,146
laws, this patchwork is coming.
168
00:16:45,146 --> 00:16:50,731
And so when that patchwork does develop, we have a couple questions to ask.
169
00:16:50,731 --> 00:16:52,933
And this is my concern.
170
00:16:52,933 --> 00:16:59,878
So if you talk to folks about laboratories of democracy, they'll tell you this is exactly
how
171
00:17:00,002 --> 00:17:01,323
federalism supposed to work.
172
00:17:01,323 --> 00:17:01,973
This is great.
173
00:17:01,973 --> 00:17:08,527
We have states experimenting with different novel approaches to a tricky regulatory
solution.
174
00:17:09,008 --> 00:17:14,332
Well, the issue there is that AI isn't contained by state borders, right?
175
00:17:14,332 --> 00:17:24,158
This isn't something like regulating a specific school district in your community or
regulating a specific natural resource that's just in your state.
176
00:17:24,376 --> 00:17:33,789
how you regulate AI can have huge ramifications on how AI is developed and deployed across
the entire country.
177
00:17:33,789 --> 00:17:42,311
And so I think that's one key element to point out is that laboratories of democracy imply
that they're operating in Petri dishes.
178
00:17:42,311 --> 00:17:44,451
And yet these Petri dishes have been broken.
179
00:17:44,451 --> 00:17:50,693
And so one state's AI regulation is going to flood into and impact other states.
180
00:17:50,893 --> 00:17:54,434
Another key thing to point out about laboratories
181
00:17:54,474 --> 00:18:00,357
and I'm a sucker for puns and metaphors, so apologize for leaning so heavily into this.
182
00:18:00,437 --> 00:18:05,460
But when you think about laboratories, you're talking about experiments, right?
183
00:18:05,480 --> 00:18:12,624
Well, experiments imply that you're going to learn from and adjust and change based off of
the results.
184
00:18:12,764 --> 00:18:21,889
But something we don't see in a lot of these state laws are things like sunset clauses,
things that would say, okay, we're gonna try this law for two years.
185
00:18:21,889 --> 00:18:23,810
At the end of the two years, we're going to
186
00:18:23,810 --> 00:18:28,332
reevaluate, should we move forward with this legislation or should we change it?
187
00:18:28,332 --> 00:18:40,437
We don't see huge outlays, huge investments in things like retrospective review, where we
would perhaps identify outside stakeholders and independent experts to evaluate whether
188
00:18:40,437 --> 00:18:42,418
that legislation worked as intended.
189
00:18:42,418 --> 00:18:47,950
If we had those safeguards in place to be able to say, was this a good idea in retrospect?
190
00:18:47,950 --> 00:18:52,300
Should we move forward with this or do we need to go back to the drawing board?
191
00:18:52,300 --> 00:18:56,734
I think that would make a lot of folks who are concerned about this patchwork more
comfortable.
192
00:18:56,734 --> 00:19:06,652
And I hope that state legislators consider investing in and moving forward with that sort
of, with those sorts of safeguards, but I haven't seen that so far.
193
00:19:06,685 --> 00:19:07,276
Interesting.
194
00:19:07,276 --> 00:19:18,906
And then how do, I don't know if the New York Times suit against OpenAI was in federal
court or state court, but you know, there was a ruling where they had to essentially
195
00:19:18,906 --> 00:19:27,173
retain history for a certain period of time that created all sorts of other unintended
consequences.
196
00:19:27,173 --> 00:19:33,628
Like how, how are we going to navigate scenarios like, like that in the current state?
197
00:19:33,858 --> 00:19:42,264
Yeah, so right now the pending legislation, excuse me, the pending litigation between the
New York Times and OpenAI, that's in federal district court.
198
00:19:42,264 --> 00:19:54,822
And this preservation requirement of basically saving uh queries that have been entered to
OpenAI has caused a lot of alarm bells to go off, especially in the legal community.
199
00:19:54,822 --> 00:20:03,378
I've already talked to folks at uh various firms who say that they've had partners,
they've had clients coming to them and saying, see,
200
00:20:03,416 --> 00:20:06,188
This is exactly why we shouldn't use AI.
201
00:20:06,188 --> 00:20:16,756
And uh now we see that our queries may be retained and who knows what that means for
maintaining client confidentiality and attorney-client privilege.
202
00:20:16,756 --> 00:20:20,118
And so this has opened up a pretty big can of worms.
203
00:20:20,118 --> 00:20:25,542
And this all speaks to the fact that we need some regulatory clarity.
204
00:20:25,643 --> 00:20:29,545
We know that when we have a absence of...
205
00:20:29,545 --> 00:20:30,816
uh
206
00:20:30,816 --> 00:20:43,513
safeguards and an absence of knowledge about how and when laws are going to be enforced or
how especially outdated and antiquated rules and norms in various professions, how those
207
00:20:43,513 --> 00:20:49,897
are going to be applied in this new novel context, really adds to an unhelpful degree of
ambiguity.
208
00:20:49,897 --> 00:20:59,822
And um it's also important to note that should we feel comfortable from a bigger D
democracy question with the fact that
209
00:20:59,822 --> 00:21:07,949
one judge sitting in a federal district court is upending a lot of use cases of AI right
now.
210
00:21:07,949 --> 00:21:09,251
A lot of people are skeptical.
211
00:21:09,251 --> 00:21:10,852
A lot of people are scared.
212
00:21:10,852 --> 00:21:22,342
And this is another reason why we should be having a national conversation about AI and
pressing Congress's feet to the fire to say, need a national vision.
213
00:21:22,342 --> 00:21:26,668
We need clarity so that we can prevent this sort of patchwork approach.
214
00:21:26,668 --> 00:21:37,257
And so that courts know how to proceed rather than kind of uh seemingly developing some
unclear uh steps via these bespoke pieces of litigation.
215
00:21:37,257 --> 00:21:42,608
Yeah, and like how does that impact OpenAI relative to its competitors?
216
00:21:42,608 --> 00:21:49,710
Like, you know, I actually do a fair amount of legal analysis in the AI models for a
variety of things.
217
00:21:49,710 --> 00:21:59,643
If I have a new hire and they have a non-compete, they have non-compete language that I
have to figure out, navigate my way through, or, you know, we're dealing with an operating
218
00:21:59,643 --> 00:22:04,644
agreement amendment right now amongst the partners at InfoDash and I have been digging
deep.
219
00:22:04,644 --> 00:22:07,295
I've been using other models
220
00:22:07,295 --> 00:22:18,416
because I don't want, mean, it feels like it's really putting um a burden on OpenAI
relative to its competitors.
221
00:22:18,416 --> 00:22:20,157
Is that accurate?
222
00:22:20,248 --> 00:22:34,762
Yeah, I don't have specific insight into whether their monthly average user count, for
example, has taken a hit or if uh we've seen any major changes to their clientele,
223
00:22:34,762 --> 00:22:37,783
especially with respect to large enterprises.
224
00:22:37,803 --> 00:22:41,254
My hunch is that things have definitely slowed.
225
00:22:41,254 --> 00:22:47,535
I know a lot of companies are using CoPilot and they're saying, my gosh, why are we using
CoPilot?
226
00:22:47,535 --> 00:22:50,038
Can we find anything else to switch to, which is
227
00:22:50,038 --> 00:22:51,698
a whole nother conversation.
228
00:22:51,918 --> 00:22:56,980
And they probably initially were saying, great, let's just go to OpenAI.
229
00:22:56,980 --> 00:23:05,273
But the second you get a lawyer in the room who's aware of this preservation request and
worried about that language and worried about this perhaps occurring again in the future,
230
00:23:05,273 --> 00:23:07,983
that may slow things down.
231
00:23:07,983 --> 00:23:14,305
So I think you're right to say this minimally isn't helping increase OpenAI's user base.
232
00:23:14,305 --> 00:23:16,876
ah I will say that the
233
00:23:16,876 --> 00:23:24,010
the sheer number of users they already have and the sophistication of 03, for example, and
just kind of the head start they've maintained.
234
00:23:24,110 --> 00:23:36,918
I don't think this is catastrophic for OpenAI, but if anything, I think it's more more
headwinds to the industry as a whole uh that, you know, kind of validates, rightly or
235
00:23:36,918 --> 00:23:43,379
wrongfully, concerns about whether these are viable tools for the long term for uh
professionals.
236
00:23:43,379 --> 00:23:51,678
Yeah, and it also brings up another interesting um dynamic, is, is it going to increase
investments?
237
00:23:51,678 --> 00:23:55,322
I think these things benefit Metta, um right?
238
00:23:55,322 --> 00:24:02,430
And the open source scenarios that you can self-host and essentially control the
environment in which you engage.
239
00:24:02,430 --> 00:24:04,620
um I don't know.
240
00:24:04,620 --> 00:24:05,983
Do you agree?
241
00:24:06,254 --> 00:24:07,834
I would definitely agree.
242
00:24:07,834 --> 00:24:13,574
think that the future will probably look a lot more open source.
243
00:24:13,574 --> 00:24:21,254
know that, fortunately, Sam Altman has tipped his hand and said that OpenAI wants to go
the open source route.
244
00:24:21,374 --> 00:24:34,994
We know that Meta is stealing more talent than the Lakers do in the off season in terms of
the number of AI experts they've poached from OpenAI as well as from ScaleAI.
245
00:24:35,158 --> 00:24:45,901
And so I think if you just look at how this race is going to develop, more and more large
enterprises are going to want to exercise more and more control over their models.
246
00:24:45,941 --> 00:24:49,142
And open sourcing just makes that far more feasible.
247
00:24:49,142 --> 00:25:01,285
um There's also been an evolution, I'd say, in the national security conversation around
open AI, or excuse me, an evolution in the national security conversation around open
248
00:25:01,285 --> 00:25:02,046
source.
249
00:25:02,046 --> 00:25:03,586
I think for a long time,
250
00:25:03,586 --> 00:25:14,674
there was a concern that open-sourcing models would lead to bad actors getting their hands
on those models sooner rather than later and using them for nefarious purposes.
251
00:25:14,954 --> 00:25:29,485
Following DeepSeek, which I guess is almost uh seven months old now, that DeepSeek moment
made a lot of people realize that the US moat with respect to peers and adversaries like
252
00:25:29,485 --> 00:25:31,686
China isn't as
253
00:25:31,754 --> 00:25:34,686
Extensive isn't as wide as previously imagined.
254
00:25:34,686 --> 00:25:48,524
And so if we can get more sophisticated AI tools like open source models in more hands, we
can collectively be a more savvy AI nation, a more uh thoughtful AI nation with respect to
255
00:25:48,524 --> 00:25:58,970
being able to test these models and probe them uh and use the whole of America's AI
expertise to make sure we are developing the most advanced and most sophisticated AI
256
00:25:58,970 --> 00:25:59,647
models.
257
00:25:59,647 --> 00:26:14,792
Yeah, know, shifting gears a little bit, taking everything you just said and then looking
at the legal industry, specifically big law, you know, I'm, I'm of the opinion that the
258
00:26:14,792 --> 00:26:23,901
future is the future for law firms is not a scenario where they buy ready-made off the
shelf tools.
259
00:26:23,955 --> 00:26:30,808
like Harvey and Legora that are great tools and not saying you shouldn't leverage those
tools, but they don't create differentiation.
260
00:26:30,849 --> 00:26:31,249
Right.
261
00:26:31,249 --> 00:26:37,292
If you're, if your competitor down the street can buy the same tools as you by definition,
there's no differentiation there.
262
00:26:37,292 --> 00:26:42,595
Now, how you, how you build workflows and how you use those tools can differentiate.
263
00:26:42,595 --> 00:26:53,951
But, you know, I'm of the belief that longer term, um, that law firms are going to have to
invest in strategies that leverage their data.
264
00:26:54,213 --> 00:27:00,579
and create solutions within their four walls using things like Azure OpenAI, Azure AI
Search.
265
00:27:00,579 --> 00:27:04,632
We're actually putting our chips on that part of the table ourselves here at InfoDash.
266
00:27:04,632 --> 00:27:12,608
We're an intranet and extranet company, but we have something called the integration hub
that we deploy that makes our product work.
267
00:27:12,709 --> 00:27:22,771
And it lives in the client's Azure tenant and it has tentacles into all the back office
systems and respects security trimming, ethical wall boundaries.
268
00:27:22,771 --> 00:27:27,865
And then that enables firms to tap in using Azure AI Search.
269
00:27:27,865 --> 00:27:34,100
If they want to crawl and index their practice management solution, we've enabled them to
do that.
270
00:27:34,100 --> 00:27:42,377
If they want to, we've got a labor and employment firm who has all of this amazing labor
and employment data that they compile for all 50 states.
271
00:27:42,377 --> 00:27:46,880
And they also have all of their clients, employment agreements, employee handbooks.
272
00:27:46,880 --> 00:27:49,482
And we're like, hey, wait minute, you got the ingredients here.
273
00:27:49,683 --> 00:27:52,755
Use our integration hub, tap into there, build an
274
00:27:52,755 --> 00:28:04,890
open Azure AI search and Azure Open AI, go flag all the exceptions and instead of your
clients having to log in and peruse the new regulatory updates in Wisconsin, you
275
00:28:04,890 --> 00:28:08,872
proactively go to them and say, hey, look, you've got exceptions and we can help you
remediate them.
276
00:28:08,872 --> 00:28:10,453
I see that as the future.
277
00:28:10,453 --> 00:28:10,873
I don't know.
278
00:28:10,873 --> 00:28:12,078
How do you view that?
279
00:28:12,078 --> 00:28:14,139
You know, am...
280
00:28:15,000 --> 00:28:22,904
The thing, I could scream from the uh rooftops or mountaintops you pick, would really be
doubling down on this data question.
281
00:28:22,904 --> 00:28:32,390
Because I think that folks are realizing that access to compute, even though it's
difficult, that's going to be something that's available.
282
00:28:32,390 --> 00:28:40,374
Access to the best algorithms, yes, we're going to see some people differentiate
themselves with respect to the efficiency of those algorithms.
283
00:28:40,586 --> 00:28:52,222
Access to talent, obviously a huge one as well, but when it comes to identifying narrow AI
use cases, that AI use cases that are going to have real practical, meaningful impact on
284
00:28:52,222 --> 00:28:57,095
businesses, on society, on government, it all comes back to quality data.
285
00:28:57,195 --> 00:29:07,060
And you and I had a conversation earlier about what's an analogy perhaps of some of the
misuse we're seeing in AI right now.
286
00:29:07,060 --> 00:29:10,348
And for me, it kind of goes back to this notion of
287
00:29:10,348 --> 00:29:11,878
a Model T car.
288
00:29:11,938 --> 00:29:21,361
You have this tool and if you're driving on streets in 1907 and you've got a Model T, are
you going to drive across the country?
289
00:29:21,361 --> 00:29:22,782
No, you just won't make it.
290
00:29:22,782 --> 00:29:24,922
There's not the proper infrastructure there.
291
00:29:24,922 --> 00:29:26,343
There's no gas stations.
292
00:29:26,343 --> 00:29:28,103
There's no highway system.
293
00:29:28,103 --> 00:29:32,364
Are you even going to be able to drive it across town reliably?
294
00:29:32,364 --> 00:29:33,535
Maybe not, right?
295
00:29:33,535 --> 00:29:35,585
It depends on the context.
296
00:29:35,685 --> 00:29:38,486
And when you have people right now taking
297
00:29:38,622 --> 00:29:44,864
You know, as you mentioned, just kind of generative AI tools readily available to the rest
of the competition.
298
00:29:44,864 --> 00:29:52,968
And you try to use that for your most sophisticated use case to tailor your best brief to
craft a really bespoke contract.
299
00:29:52,968 --> 00:29:58,710
It's going to fail unless you're training it on the best high quality data.
300
00:29:58,710 --> 00:30:00,160
And to your point,
301
00:30:00,426 --> 00:30:03,138
Large law firms, they're already leaning into this.
302
00:30:03,138 --> 00:30:15,597
They're working with the open AIs of the world to say, help us craft a proprietary version
of ChatGPT that's been trained specifically on our vast troves of data.
303
00:30:15,597 --> 00:30:27,805
If you think about some of these incredibly large law firms that have an international
presence, that have been in operations for decades, that have been creating contracts, uh
304
00:30:27,805 --> 00:30:29,216
thousands of them.
305
00:30:29,216 --> 00:30:31,928
a year, if not millions of them a year.
306
00:30:31,988 --> 00:30:42,776
The sheer quantity of that data is going to be a huge asset for them to be able to create
AI tools that uh give them a meaningful advantage over the competition.
307
00:30:42,776 --> 00:30:53,893
And that's arguably my biggest concern is that we're going to see the largest firms
continue to build a larger and larger advantage over those small mom and pop shops, for
308
00:30:53,893 --> 00:30:57,944
example, over those boutique law firms who they don't have.
309
00:30:57,944 --> 00:31:09,941
Thousands of contracts or millions of contracts to train a model on and so I'm a little
bit concerned about what the nature the competitive landscape of the legal Ecosystem looks
310
00:31:09,941 --> 00:31:11,521
like a few years from now
311
00:31:11,617 --> 00:31:13,317
Yeah, I mean, that's a great point.
312
00:31:13,317 --> 00:31:21,177
So I think that, um, it, know, there's a, there's a lot of dynamics in the legal
marketplace that are somewhat unique.
313
00:31:21,177 --> 00:31:23,437
First of all, it's extremely fragmented.
314
00:31:23,657 --> 00:31:28,377
the top five firms control less than 7 % of the market.
315
00:31:28,377 --> 00:31:33,017
It's about 350 billion ish in legal spend.
316
00:31:33,185 --> 00:31:38,168
And um the entire Amla 100 controls less than half.
317
00:31:38,168 --> 00:31:47,674
If you look at other industries, I use the big four because that's very, that's kind of
the closest analog is accounting um and audit.
318
00:31:47,674 --> 00:31:59,731
And they, the big four controlled 97 % of the audit work of all US public companies,
Completely different um concentration makeup there.
319
00:31:59,731 --> 00:32:09,526
So I look at, okay, the AMLAL today, extremely fragmented, very bespoke culture with, that
has not really embraced innovation historically.
320
00:32:09,526 --> 00:32:20,011
They're laggards, they're in a partnership model with a cash basis accounting that
prioritizes profit taking instead of capital expenditure and R &D.
321
00:32:20,011 --> 00:32:25,249
And I struggle to see on both ends, all really all ends of the spectrum, how
322
00:32:25,249 --> 00:32:28,889
How do we get to 2.0, big law and small law?
323
00:32:28,889 --> 00:32:29,189
I don't know.
324
00:32:29,189 --> 00:32:30,649
Do you have any thoughts on that?
325
00:32:30,680 --> 00:32:42,653
You know, I think the first place it has to start with is law schools, which is why I'm so
thrilled to be exactly where I am because we need to get more law school students who are
326
00:32:42,653 --> 00:32:45,834
increasingly thinking in an entrepreneurial lens, right?
327
00:32:45,834 --> 00:32:50,036
They've grown up in an era of move fast and break things.
328
00:32:50,036 --> 00:32:59,258
And increasingly, when I have new students come here to UT, they will have gone to
undergrad institutions that have enterprise level.
329
00:32:59,402 --> 00:33:00,563
AI accounts, right?
330
00:33:00,563 --> 00:33:10,188
They're going to have four years of experience, hopefully meaningful experience and not
just generating that essay at, you know, 1130 PM before the deadline, but some meaningful
331
00:33:10,188 --> 00:33:12,009
experience using these tools.
332
00:33:12,009 --> 00:33:20,134
And then they're going to come in to schools like UT and I'm going to be able to connect
them with companies like Rev here in Austin.
333
00:33:20,134 --> 00:33:27,406
Rev is developing transcription tools that can be used in depositions, for example, ah to
be able to
334
00:33:27,406 --> 00:33:31,146
pick up on new insights that were perhaps would otherwise go missed.
335
00:33:31,146 --> 00:33:43,406
I'm gonna be able to connect them with folks like Novo here in Austin that is changing the
workflow of personal injury attorneys and compiling medical documentation.
336
00:33:43,406 --> 00:33:54,106
And so if I can expose them to those tools as a 1L or a 2L or as a 3L, and they're the
sorts of folks who are thinking about that next generation of law, then they can be on the
337
00:33:54,106 --> 00:33:55,948
vanguard of shaping
338
00:33:55,948 --> 00:34:06,667
the law firms of the future because I really do believe that as much of a big advantage as
law firms may have right now, the largest law firms may have right now and as great of an
339
00:34:06,667 --> 00:34:21,239
advantage they may have with respect to data as we discussed, if you're a client and
someone comes to you and says, look, I've gotten rid of all of the waste, uh all of the
340
00:34:21,239 --> 00:34:25,056
rainmaker funds that you're going to pay if you're going to go with the biggest firm.
341
00:34:25,056 --> 00:34:33,762
and I am this agile, client-forward, AI-first law firm, I think I know who I want to go
with, right?
342
00:34:33,762 --> 00:34:34,913
And that's the bet.
343
00:34:34,913 --> 00:34:47,671
That's the thing we have to lean into as legal educators and as a whole legal ecosystem,
because I'm most excited by the potential for AI to really lower access to justice
344
00:34:47,671 --> 00:34:54,668
barriers, um the kind of thing that the legal community loves to hide and not
345
00:34:54,668 --> 00:34:58,560
discuss is that we have a huge access to justice gap.
346
00:34:58,560 --> 00:35:12,969
If you look at the recent California State Bar Report from 2024 analyzing how likely it
was that an individual who has a civil legal issue actually receives legal counsel, it's
347
00:35:12,969 --> 00:35:16,011
staggeringly low and it's problematically low.
348
00:35:16,011 --> 00:35:20,233
And so I think that the legal community has an obligation.
349
00:35:20,233 --> 00:35:21,390
If you look at our
350
00:35:21,390 --> 00:35:33,240
uh Rules of professional conduct whether it's the ABA model rules or a state's rules of
professional conduct every lawyer has an obligation to the quality of the justice system
351
00:35:33,520 --> 00:35:44,870
and quality has to mean that we provide everyone who has a right to defend a right to
assert with meaningful guidance and right now we just don't have enough lawyers, but we
352
00:35:44,870 --> 00:35:49,676
can meet that need or we can vastly expand our ability to meet that need with AI so
353
00:35:49,676 --> 00:36:01,766
That's where I get excited and that's where I really say if we have a next generation of
lawyers leaning into AI, they might manage to disrupt some of these uh really stodgy uh
354
00:36:01,766 --> 00:36:04,585
inertial dynamics of the legal marketplace.
355
00:36:04,585 --> 00:36:15,138
Yeah, and you know that that would eliminate uh a key lever if we really de lower the bar
for access to justice.
356
00:36:15,138 --> 00:36:22,340
A very common tactic is, you know, financial means, right?
357
00:36:22,340 --> 00:36:31,133
Like I know if I've got more dollars to spend on a legal proceeding than you do, that is
leverage for me.
358
00:36:31,133 --> 00:36:31,583
Right?
359
00:36:31,583 --> 00:36:32,393
So
360
00:36:33,178 --> 00:36:42,229
Having that dynamic diminished, think really changes the game and maybe produces better
outcomes.
361
00:36:42,382 --> 00:36:55,882
Yeah, and I think this should be a moment where the legal industry and the legal academy
looks at some of the systems and assumptions we've been making for almost 100 years and
362
00:36:55,882 --> 00:36:57,382
takes those head on.
363
00:36:57,422 --> 00:37:02,242
The federal rules of civil procedure were written in 1938.
364
00:37:03,182 --> 00:37:03,942
1938?
365
00:37:03,942 --> 00:37:08,554
That's almost a century ago, and we're still adhering to
366
00:37:08,554 --> 00:37:19,238
arbitrary deadlines that someone thought would be good, where it's still unsure of exactly
what you need to include in your complaint to survive a motion to dismiss.
367
00:37:19,278 --> 00:37:31,353
These are ludicrous, antiquated ways of thinking about how people should be able to assert
their rights in a country that really prizes itself on the rule of law and everyone being
368
00:37:31,353 --> 00:37:32,844
equal under the law.
369
00:37:32,844 --> 00:37:35,675
That's just not the case under these outdated systems.
370
00:37:35,675 --> 00:37:37,858
And so I'm optimistic that
371
00:37:37,858 --> 00:37:52,186
This is a time for creative thinking uh and for folks from across different disciplines to
come to lawyers and say, hey, let us help you revise uh these norms and these rules so
372
00:37:52,186 --> 00:37:54,249
that you can better fulfill your purpose.
373
00:37:54,249 --> 00:38:04,499
Yeah, you know, and along those lines, the it's obviously it's going to change the way of
that law firms price, right?
374
00:38:04,499 --> 00:38:05,801
Their pricing strategies.
375
00:38:05,801 --> 00:38:08,853
And you're seeing some really interesting challenging firms in the UK.
376
00:38:08,853 --> 00:38:18,813
You have Garfield Law that's it is a AI uh first or maybe AI only kind of small claims.
377
00:38:19,365 --> 00:38:25,208
I don't know if they're a tech company or a law firm, you know, the rules are different
over there with the Legal Services Act.
378
00:38:25,208 --> 00:38:29,051
And now you have Crosby AI here in the US.
379
00:38:29,051 --> 00:38:32,092
It's a really interesting time to be a challenger firm.
380
00:38:32,092 --> 00:38:40,817
But you know, whenever I hear and I talk a lot, in fact, I would just attended a
conference uh inside practice event in New York on pricing.
381
00:38:40,817 --> 00:38:47,881
It's actually financial management and innovation, but we talked a lot about pricing and
um
382
00:38:47,881 --> 00:38:57,666
You know, a lot of people like to throw up concepts that sound good, like outcome based
pricing and value based pricing.
383
00:38:57,666 --> 00:39:03,428
know, I think, yes, that makes sense to me, but there's, there's challenges with that.
384
00:39:03,428 --> 00:39:04,609
So here in St.
385
00:39:04,609 --> 00:39:12,592
Louis, where I live, all the plumbing companies, I don't know if they've banded together,
but they've decided that they are no longer doing time and materials work.
386
00:39:12,592 --> 00:39:14,523
They only do flat fee work.
387
00:39:14,527 --> 00:39:18,821
and they will not give you a breakdown of labor versus materials.
388
00:39:18,841 --> 00:39:29,331
And as a consumer, that creates um a opaque uh veil between me and my ability to see if
I'm getting a fair deal.
389
00:39:29,331 --> 00:39:38,300
um But uh I had some work done in my basement, and they came in, and I had a leak in a
sewer line.
390
00:39:39,073 --> 00:39:48,300
You know, I sat back and thought about it like, okay, what is it worth to me to not have
my basement, my sewer flood, my base, quite a lot, but that's not, I'm not going to base
391
00:39:48,300 --> 00:39:53,944
my willingness to pay a price based on that value or that outcome.
392
00:39:53,944 --> 00:39:56,926
It still comes back to supply and demand, right?
393
00:39:56,926 --> 00:40:05,392
In other words, if I can find another plumber to deliver the same outcome for less money,
then I'm going that direction.
394
00:40:05,392 --> 00:40:07,041
You can't say, well, it's worth
395
00:40:07,041 --> 00:40:08,621
So my basement did flood.
396
00:40:08,621 --> 00:40:10,741
cost me about 45 grand.
397
00:40:10,901 --> 00:40:21,741
Um, I had some insurance, but, um, uh, so that offset some of it, but so I know the exact
cost of, of a flood down there, but I'm, you know, they can't say, well, it's going to be
398
00:40:21,741 --> 00:40:22,721
15%.
399
00:40:22,721 --> 00:40:24,021
That's a fair price.
400
00:40:24,021 --> 00:40:27,921
In the legal world, I look at it the same way. Like, yes, okay.
401
00:40:27,981 --> 00:40:34,667
The value that you're delivering and the outcome that you may be preventing or enabling
does have a dollar figure.
402
00:40:34,667 --> 00:40:40,728
But you being able to charge a portion of that is also influenced by supply and demand.
403
00:40:40,728 --> 00:40:41,969
So I don't know.
404
00:40:42,557 --> 00:40:45,840
How do you see that playing out in the pricing situation?
405
00:40:45,840 --> 00:40:56,733
You know, on the pricing one, I'll say, leaning into my earlier comment, it's not
my area of expertise in terms of thinking through how exactly this will change
406
00:40:56,733 --> 00:40:58,574
those firm pricing tactics.
407
00:40:58,574 --> 00:41:10,557
But I will agree with you that I think it is so essential that we use this moment to get
back to first principles about what it is we're actually trying to achieve with our
408
00:41:10,557 --> 00:41:11,877
justice system.
409
00:41:11,937 --> 00:41:15,398
And if it's just getting money out of the litigants,
410
00:41:15,788 --> 00:41:17,319
that's a problem, right?
411
00:41:17,319 --> 00:41:23,983
And I think we need to really use this moment to explore ideas like regulatory sandboxes.
412
00:41:23,983 --> 00:41:36,669
I was talking earlier about my encouragement and advocacy for sunset clauses and for
retrospective review; that should be the case in the legal industry as well, in how we
413
00:41:36,669 --> 00:41:37,970
govern ourselves.
414
00:41:37,970 --> 00:41:45,272
So I want to see more states actually have some degree of experimentation with how this new
415
00:41:45,272 --> 00:41:54,787
tool is being used, how this new pricing system is being used, who's implicated, who's not
litigating their claims, who's litigating too many claims.
litigating their claims, who's litigating too many claims.
416
00:41:54,787 --> 00:42:02,681
All of this should be tracked, monitored, analyzed, shared, and used as the basis to
inform our rules going forward.
417
00:42:02,681 --> 00:42:07,194
But we're not a very empirically savvy profession, right?
418
00:42:07,194 --> 00:42:15,148
The fact that tech justice and tech law seemingly appeared only a decade or
two ago
419
00:42:15,158 --> 00:42:19,800
is pretty indicative of a profession that's been around arguably since the beginning of
time.
420
00:42:20,080 --> 00:42:26,083
So, you know, maybe we could improve the extent to which we're trying to really monitor
how we're doing.
421
00:42:26,083 --> 00:42:31,425
And I hope there is some experimentation here because the stakes are so high, to your
point, Ted.
422
00:42:31,425 --> 00:42:42,750
And something else I think will happen that we should keep our eye on is: how is the
private sector changing the way it adjudicates its
423
00:42:42,750 --> 00:42:43,980
own claims?
424
00:42:44,034 --> 00:42:54,060
So how are we going to see businesses, for example, start to negotiate with one another
rather than going to the typical public justice system?
425
00:42:54,060 --> 00:43:02,184
They're going to start sending over disputes and claims to AI judges and to AI
adjudication systems.
426
00:43:02,184 --> 00:43:02,894
Why?
427
00:43:03,205 --> 00:43:12,680
Well, rather than waiting for months or years for that dispute to be resolved, they're
just going to outsource it to an agreed upon AI system.
428
00:43:13,000 --> 00:43:22,786
And we should actually pay a lot of attention to how those systems are working and whether
in certain contexts they may be appropriate to use to resolve some public disputes as
429
00:43:22,786 --> 00:43:23,455
well.
430
00:43:23,455 --> 00:43:25,086
Yeah, that makes a lot of sense.
431
00:43:25,086 --> 00:43:27,056
We only have a couple of minutes left.
432
00:43:27,056 --> 00:43:32,228
I want you uh to touch on a topic that you wrote about that I find really interesting.
433
00:43:32,228 --> 00:43:35,889
And that's around like knowledge diffusion and AI literacy.
434
00:43:35,889 --> 00:43:41,731
And I know we could probably spend the whole episode just talking about that, but
it's such an interesting topic.
435
00:43:41,731 --> 00:43:49,833
Like, can you give us a Reader's Digest version of what that means and how it
impacts AI literacy?
436
00:43:50,188 --> 00:43:52,829
Yeah, so let's imagine a hypothetical.
437
00:43:52,829 --> 00:43:57,141
I'm a law professor after all, so I have to throw out a hypo every now and again.
438
00:43:57,141 --> 00:44:00,442
Let's say tomorrow we get AGI.
439
00:44:00,442 --> 00:44:14,018
OpenAI says: we've announced the most sophisticated AI tool, capable of detecting cancer at
100% accuracy, capable of tutoring everyone according to their learning style and
440
00:44:14,018 --> 00:44:15,128
learning abilities.
441
00:44:15,128 --> 00:44:16,909
All of that's available tomorrow.
442
00:44:17,509 --> 00:44:19,870
I don't think we'd actually make a ton of use of it.
443
00:44:20,236 --> 00:44:20,606
Right?
444
00:44:20,606 --> 00:44:27,471
If it came about tomorrow, the American Medical Association would want to kick
the tires of that AI.
445
00:44:27,471 --> 00:44:34,255
We'd have parent-teacher associations that would want to thoroughly vet any implementation
of that AI.
446
00:44:34,255 --> 00:44:37,698
School districts, state bars, as we've talked about.
447
00:44:37,698 --> 00:44:38,778
You name the profession.
448
00:44:38,778 --> 00:44:42,400
You name all of these different barriers and frictions.
449
00:44:42,561 --> 00:44:45,062
In many cases, I think those are appropriate.
450
00:44:45,270 --> 00:44:56,475
We should have a degree of skepticism: before we introduce these AI
tools into really sensitive, really important use cases, let's make sure we're vetting
451
00:44:56,475 --> 00:44:56,695
them.
452
00:44:56,695 --> 00:44:59,956
Let's make sure we understand what we're about to proceed with.
453
00:45:00,537 --> 00:45:13,802
How we do that vetting, and whether that vetting is actually successful and rational and
not based on, uh, skepticism or fear or concerns about, you know,
454
00:45:14,050 --> 00:45:18,292
black swan events where the whole of society gets turned into paper clips.
455
00:45:18,332 --> 00:45:21,073
That's contingent upon AI literacy.
456
00:45:21,293 --> 00:45:25,615
Do folks have enough of an understanding of how the technology works?
457
00:45:25,615 --> 00:45:34,018
Do they have enough experience with the technology to know its limitations and its best
use cases?
458
00:45:34,019 --> 00:45:39,741
Do they have a willingness to experiment with that technology in really important cases?
459
00:45:39,941 --> 00:45:42,456
If the answer is no to those questions,
460
00:45:42,456 --> 00:45:47,488
then it doesn't matter if America is the first to achieve AGI, right?
461
00:45:47,488 --> 00:46:00,012
That's my big concern about the lack of emphasis we've placed on knowledge diffusion
because right now uh we know that China, for example, is investing heavily in increasing
462
00:46:00,012 --> 00:46:03,633
the number of PhDs with expertise in AI.
463
00:46:03,633 --> 00:46:07,294
We know that other countries are actively trying to solicit
464
00:46:07,338 --> 00:46:15,683
as many AI experts as possible to move to their country and to lend their expertise to
their governments, to their businesses, to their schools.
465
00:46:15,683 --> 00:46:23,126
Estonia has a mandate for all of their public school students to be exposed to AI.
466
00:46:23,787 --> 00:46:26,988
Where do we see that sort of vision here in the States?
467
00:46:27,089 --> 00:46:33,932
We've yet to have anything meaningful here; for example, what I've called for: an AI education corps.
468
00:46:33,932 --> 00:46:37,314
Why aren't we using our community colleges, for instance,
469
00:46:37,314 --> 00:46:45,639
to help train and deploy folks who can then go to small businesses in their community and
say, here's an AI tool that would really help you out.
470
00:46:45,639 --> 00:46:48,541
And let me help you integrate that into your small business.
471
00:46:48,541 --> 00:46:58,467
We can have public libraries serve as hubs for AI companies to come do demonstrations for
people to learn about the latest and greatest AI.
472
00:46:58,467 --> 00:47:00,438
These steps are really important.
473
00:47:00,438 --> 00:47:03,540
And for listeners who are thinking, OK, well,
474
00:47:03,544 --> 00:47:11,916
you know, this all sounds nice, and yeah, it would be excellent if we could diffuse all
this and increase the general level of AI literacy.
475
00:47:12,136 --> 00:47:18,058
I encourage those folks who are maybe a little skeptical to go read the work of Jeffrey
Ding.
476
00:47:18,058 --> 00:47:24,019
Jeffrey Ding is a political scientist, and he's studied this diffusion question closely.
477
00:47:24,020 --> 00:47:31,131
And in the context of the Cold War, it was often the USSR who was the first to innovate,
right?
478
00:47:31,131 --> 00:47:33,132
They were the first to get to Sputnik.
479
00:47:33,132 --> 00:47:39,725
For example, they made a lot of early advances on weapons systems where we were lagging
behind.
480
00:47:39,826 --> 00:47:41,247
Why did we win?
481
00:47:41,247 --> 00:47:53,313
Well, we had more engineers, we had more scientists, we had more general expertise so that
we could turn those innovations into actual progress, into actual tangible goods and
482
00:47:53,313 --> 00:47:56,015
services in a much faster fashion.
483
00:47:56,015 --> 00:48:01,986
So knowledge diffusion really is the key to turning innovation into progress.
484
00:48:01,986 --> 00:48:04,294
And we need to place a greater emphasis on that.
485
00:48:04,839 --> 00:48:06,129
I couldn't agree more.
486
00:48:06,129 --> 00:48:13,052
I love the community college idea and the public library idea that you posed.
487
00:48:13,052 --> 00:48:18,034
And I would say, let's start with some knowledge diffusion among the legislators.
488
00:48:18,034 --> 00:48:22,375
Ah, the ones making the rules, you know?
489
00:48:22,375 --> 00:48:30,121
Not only them, but I'd also not be a good academic if I didn't err on the side of being a
little self-promotional.
490
00:48:30,121 --> 00:48:38,747
I wrote a whole law review article called, what was it, something like "An F in Judicial Education."
491
00:48:38,747 --> 00:48:47,872
And it's all about how if you go talk to state judges, they're not getting recurring
meaningful education on the latest technology.
492
00:48:47,872 --> 00:48:58,329
If you go talk to Supreme Court judges on various state Supreme Courts, it's not like they
go get a briefing from OpenAI about how AI works.
493
00:48:58,329 --> 00:49:06,344
Like the rest of us, they're just trying to figure it out by doing some Googling, or
Perplexity searches, I guess, now, or hoping that their clerks have learned about
494
00:49:06,344 --> 00:49:07,155
AI.
495
00:49:07,155 --> 00:49:12,658
That's not a really reliable, good strategy for a high-quality justice system.
496
00:49:12,831 --> 00:49:13,321
No doubt.
497
00:49:13,321 --> 00:49:20,204
I had a judge on, um, God, must've been six months ago, Judge Scott Schlegel, in
Louisiana.
498
00:49:20,204 --> 00:49:31,129
And he gave a really good assessment of just the state of the judicial system, um,
technology-wide, not just AI specifically, and its lack of
499
00:49:31,129 --> 00:49:32,290
readiness.
500
00:49:32,290 --> 00:49:39,583
Um, he works a lot with domestic violence cases, and, you know, the ability to use deepfake
technology
501
00:49:39,687 --> 00:49:43,451
on both sides of the equation and just the risks around that.
502
00:49:43,451 --> 00:49:46,836
And it was a really good episode.
503
00:49:46,836 --> 00:49:47,927
It's wild.
504
00:49:47,927 --> 00:49:59,458
And I think that the more we continue to see schools like UT, uh schools like Vanderbilt,
lean into AI and try to make sure the next generation is AI literate and achieving that
505
00:49:59,458 --> 00:50:04,293
sort of knowledge diffusion among key professionals, the better we can serve everyone.
506
00:50:04,293 --> 00:50:07,125
I mean, the same goes for doctors as well, right?
507
00:50:07,125 --> 00:50:10,348
Do you want a doctor who doesn't trust
508
00:50:10,816 --> 00:50:22,442
radiological AI tools despite them having 99 or 95 percent accuracy, far greater accuracy
than the human equivalent? I'd rather go to the AI doctor, right?
509
00:50:22,442 --> 00:50:25,331
So we need this across so many professions.
510
00:50:25,331 --> 00:50:26,884
Yeah, no, that's a great point.
511
00:50:26,884 --> 00:50:29,217
Well, this has been a great conversation.
512
00:50:29,217 --> 00:50:32,412
Why don't you tell our listeners how to find out more?
513
00:50:32,412 --> 00:50:34,094
It sounds like you got a podcast.
514
00:50:34,094 --> 00:50:34,975
What's the name of it?
515
00:50:34,975 --> 00:50:36,277
How do they find your writing?
516
00:50:36,277 --> 00:50:38,338
And how do they connect with you?
517
00:50:38,338 --> 00:50:39,038
Yeah, yeah.
518
00:50:39,038 --> 00:50:50,060
So if you're interested in AI policy and AI governance, check out Scaling Laws; it
should be available on all podcast sites that you go
519
00:50:50,060 --> 00:50:50,943
to.
520
00:50:50,943 --> 00:51:01,648
If you want my own musings on AI, I write on Substack at Appleseed AI, like Johnny
Appleseed, trying to spread the word, trying to diffuse some AI knowledge.
521
00:51:01,648 --> 00:51:06,549
And then you can always find me on X and Bluesky at Kevin T.
522
00:51:06,549 --> 00:51:07,570
Frazier.
523
00:51:08,226 --> 00:51:14,633
Yeah, really appreciate the opportunity to talk with you, Ted, and hope we can do this again
because this was a hoot and a half.
524
00:51:14,633 --> 00:51:20,843
Yeah, I don't think we got to half of the agenda topics that we were planning to cover,
but it was a great discussion nonetheless.
525
00:51:20,843 --> 00:51:26,992
So, um, listen, have a great holiday weekend, and I look forward to the next conversation.
526
00:51:27,042 --> 00:51:29,198
Thank you, and yeah, hope to see you in St.
527
00:51:29,198 --> 00:51:29,909
Louis sometime.
528
00:51:29,909 --> 00:51:31,311
That sounds great.
529
00:51:31,894 --> 00:51:32,415
All right.
530
00:51:32,415 --> 00:51:33,556
Thanks, Kevin.
531
00:51:33,986 --> 00:51:34,970
Thank you.