In this episode, Ted sits down with Professor Heidi K. Brown, Associate Dean for Upper Level Writing at New York Law School, to discuss the impact of AI on legal education and practice. From understanding AI hallucinations in legal cases to exploring best practices for integrating AI into legal research and writing, Heidi shares her expertise in legal education and technology. She emphasizes the importance of critical thinking, structured writing techniques, and ethical responsibilities when using AI. This conversation highlights how lawyers and law students can effectively navigate these advancements while maintaining strong research and writing fundamentals.
In this episode, Professor Brown shares insights on how to:
Identify and mitigate AI hallucinations in legal writing
Balance AI-generated content with traditional legal research methods
Use structured writing techniques to enhance legal documents
Develop effective AI prompting strategies for better legal outcomes
Understand the ethical responsibilities of lawyers using AI
Key takeaways:
AI can be a powerful tool, but it requires oversight to prevent inaccuracies in legal documents.
Lawyers and law students must maintain strong research and writing skills to critically assess AI-generated content.
AI hallucinations have already led to legal consequences, emphasizing the need for validation of AI outputs.
Effective prompt engineering can improve AI responses but does not replace the need for legal reasoning.
The legal profession must adapt to AI while ensuring ethical standards and best practices remain a priority.
About the guest, Professor Heidi K. Brown
Professor Heidi K. Brown is the Associate Dean for Upper Level Writing at New York Law School and a former construction litigator with over 30 years of experience in legal practice and academia. A passionate advocate for well-being in the legal profession, she has authored multiple books on lawyering and performance, including The Introverted Lawyer and The Flourishing Lawyer. Heidi also specializes in AI literacy and ethical AI integration in legal writing, designing courses and workshops to help lawyers and law students navigate emerging technologies responsibly.
“As of today, we cannot solely rely on AI legal research and just be done with it and say, ‘oh, look how efficient we are.’ That’s not responsible. We need to check it against traditional legal research methods.”
1
00:00:01,772 --> 00:00:03,756
Heidi, how are you this afternoon?
2
00:00:03,756 --> 00:00:04,399
I'm great.
3
00:00:04,399 --> 00:00:04,908
How are you?
4
00:00:04,908 --> 00:00:06,220
Thanks so much for having me.
5
00:00:06,220 --> 00:00:08,270
Yeah, I appreciate you being here.
6
00:00:08,270 --> 00:00:15,288
I think you and I might have got connected through Jen Leonard maybe or...
7
00:00:15,288 --> 00:00:15,919
I think so.
8
00:00:15,919 --> 00:00:20,988
I'm a huge fan of Jen Leonard and your mutual commentary on LinkedIn, et cetera.
9
00:00:20,988 --> 00:00:22,530
I think that's how you and I met.
10
00:00:22,530 --> 00:00:23,081
That's right.
11
00:00:23,081 --> 00:00:23,302
Yeah.
12
00:00:23,302 --> 00:00:28,667
And I think you had a, did you have an article on, I think AI hallucinations?
13
00:00:28,667 --> 00:00:31,886
And I read that and I was like, I've got to get her on the podcast.
14
00:00:31,886 --> 00:00:41,666
Yes, I've been kind of obsessed with tracking lawyers and pro se litigants and experts and
law firms who have run into some trouble with using AI.
15
00:00:41,726 --> 00:00:46,966
I've been trying to, you know, I'm a professor, so I've been trying to educate my students
on what to do and what not to do.
16
00:00:46,966 --> 00:00:51,326
So I decided to write a blog piece on all the hallucination cases.
17
00:00:51,326 --> 00:00:55,278
And we're up to like 39 cases now, which is pretty incredible.
18
00:00:55,278 --> 00:00:56,438
That's unbelievable.
19
00:00:56,838 --> 00:00:57,938
Well, that's a good segue.
20
00:00:57,938 --> 00:01:01,538
You were starting to tell us a little bit about what it is, what you do.
21
00:01:01,778 --> 00:01:06,054
Why don't you give us a quick introduction and fill us in on that.
22
00:01:06,058 --> 00:01:07,819
Sure, so I'm a law professor.
23
00:01:07,819 --> 00:01:09,479
I'm a professor of legal writing.
24
00:01:09,479 --> 00:01:12,441
I teach at New York Law School in Manhattan.
25
00:01:12,601 --> 00:01:14,662
But I was a litigator for 20 years.
26
00:01:14,662 --> 00:01:16,583
I was a construction lawyer.
27
00:01:16,583 --> 00:01:21,098
I went straight through from undergrad to law school and really had no idea what I was
getting into.
28
00:01:21,098 --> 00:01:27,138
But I landed a job at a construction law firm and then ended up doing that for my entire
litigation career.
29
00:01:27,138 --> 00:01:31,256
But I've been teaching law, teaching legal writing as my specialty for...
30
00:01:31,256 --> 00:01:38,134
the past, wow, 16 years, and I write books about well-being and performance issues for law
students and lawyers.
31
00:01:38,584 --> 00:01:41,136
Well, those are all very relevant topics.
32
00:01:41,136 --> 00:01:47,440
Well-being, especially in the big law world, is a frequent topic of conversation.
33
00:01:47,440 --> 00:02:01,460
And I know there was a, I won't say the name of the firm, but there was a, I don't know if
it was a policy or a memo or something that came out from a big law firm where, you know,
34
00:02:01,460 --> 00:02:05,592
basically said, look, your job is your top priority.
35
00:02:05,592 --> 00:02:07,394
Your personal life comes second.
36
00:02:07,394 --> 00:02:08,194
And
37
00:02:08,332 --> 00:02:10,433
You know, it has led to so much burnout.
38
00:02:10,433 --> 00:02:13,196
I mean, we've had lawyers working here for us.
39
00:02:13,196 --> 00:02:16,730
You see so many that move into KM for work-life balance.
40
00:02:16,730 --> 00:02:27,939
And then you've got AI, which is another super hot topic and you being focused on the
writing aspect of legal work.
41
00:02:28,400 --> 00:02:30,782
There's lots to talk about in your world.
42
00:02:31,022 --> 00:02:35,522
Lots to talk about, yes. AI kind of waltzed into my legal writing classroom.
43
00:02:35,522 --> 00:02:43,962
I think it was the spring, February of 23, my students told me about it in the middle of
class and I did the riskiest thing a professor can do in class.
44
00:02:43,962 --> 00:02:46,322
And I was like, show me right now.
45
00:02:46,322 --> 00:02:51,702
We gave ChatGPT an assignment that I had just given my students.
46
00:02:51,702 --> 00:02:55,082
And I'm watching this thing in real time happen on the screen.
47
00:02:55,082 --> 00:02:57,862
And I thought, oh my gosh, I'm out of a job.
48
00:02:58,022 --> 00:02:59,342
But you know, that kind of
49
00:02:59,342 --> 00:03:12,622
introduced me to the concepts and when my students and I dug into the actual product that
the tool had generated, we gave it a 45 out of 100 in terms of my grading rubric.
50
00:03:12,622 --> 00:03:17,182
But I decided to go all in on, I'm not really a tech person at all.
51
00:03:17,182 --> 00:03:26,060
I'm kind of a Luddite when it comes to tech, but when it comes to AI and AI in legal, I've
just decided to go completely all in.
52
00:03:26,060 --> 00:03:33,314
and embraced it and designed classes and coursework and workshops so I can understand how
it works and also teach my students.
53
00:03:33,314 --> 00:03:34,095
Yeah.
54
00:03:34,095 --> 00:03:35,696
Well, it's a very relevant topic.
55
00:03:35,696 --> 00:03:55,375
I mean, there's so much chatter in cyberspace these days about how lawyers are trained and
how historically clients have subsidized the training of young lawyers and how different
56
00:03:55,375 --> 00:03:59,192
this model will be if lower-level legal work
57
00:03:59,192 --> 00:04:01,245
gets automated to some extent.
58
00:04:01,245 --> 00:04:08,244
And it sounds like you guys are ahead of the curve at New York Law School in terms of
preparing for that.
59
00:04:09,096 --> 00:04:19,589
We're trying to embrace still teaching and instilling in the students really strong fundamental
skills in research and writing because my issue with AI right now, the way it's being kind
60
00:04:19,589 --> 00:04:23,670
of touted out there in the media is all this pressure to be fast.
61
00:04:23,670 --> 00:04:28,552
Every ad is about accelerate, accelerate, expedite.
62
00:04:28,552 --> 00:04:36,716
But a law student or even a lawyer one year out of law school still doesn't have that
level of substantive expertise to know
63
00:04:36,716 --> 00:04:41,950
or differentiate good AI output from bad or mediocre AI output.
64
00:04:41,950 --> 00:04:54,219
So we're still trying to really focus on teaching really solid fundamental legal research
and writing skills and layering on getting comfortable using these skills, using the AI
65
00:04:54,219 --> 00:04:55,199
tools.
66
00:04:55,199 --> 00:05:04,166
So they're able to do what I like to call output discernment, like discern whether
something that AI can produce, yes, in 10 seconds, whether that's worth it or not, is it
67
00:05:04,166 --> 00:05:05,004
good?
68
00:05:05,004 --> 00:05:07,255
And a lot of times it's not.
69
00:05:07,756 --> 00:05:10,809
Again, I'm all in on these tools, but we have to be realistic.
70
00:05:10,809 --> 00:05:17,384
And just because something can do something quickly doesn't mean that velocity equals
quality.
71
00:05:17,384 --> 00:05:22,268
And so just kind of trying to teach those things and instill those skills in our students.
72
00:05:22,268 --> 00:05:30,354
So when they're out there and the law firms are expecting them to know how to use these
tools, the students, first of all, aren't seeing them for the first time, but second, are
73
00:05:30,354 --> 00:05:34,798
able to use them ethically and responsibly and not end up being one of the
74
00:05:34,798 --> 00:05:43,042
39 or 40 cases out there that we're seeing where lawyers, well-educated lawyers are
running into trouble using these tools.
75
00:05:43,042 --> 00:05:54,329
Yeah, well, that's a great segue into this AI hallucination of legal cases making its
way all the way in front of the court repeatedly.
76
00:05:54,490 --> 00:06:11,031
There was obviously the big New York case, well, the case wasn't big, but the situation was,
where a hallucination had occurred and a false citation from something that I believe was
77
00:06:11,031 --> 00:06:12,522
completely made up.
78
00:06:12,532 --> 00:06:18,065
And, you know, the poor lawyer there, I mean, AI was still new.
79
00:06:18,065 --> 00:06:26,390
Somebody had to be first, right, to get kind of nailed on this for not going back and
double checking.
80
00:06:26,390 --> 00:06:36,455
But yeah, that was what caught my eye about the writing that you did was just the sheer
volume and how this trend has continued.
81
00:06:38,216 --> 00:06:39,747
Was it 37 cases?
82
00:06:39,747 --> 00:06:42,188
How many cases out there have had this issue?
83
00:06:42,198 --> 00:06:49,802
Yes, so I started off reading that first case, Mata versus Avianca, but then there was
another case a few months later and another case.
84
00:06:49,802 --> 00:07:00,777
And right now by my tally, and I'll explain how others are finding other cases, I think I
have 14 cases in which lawyers have gotten in trouble for using AI without checking and
85
00:07:00,777 --> 00:07:02,167
verifying the cites.
86
00:07:02,167 --> 00:07:05,729
And the cases call them hallucinated cases, fictitious.
87
00:07:05,729 --> 00:07:08,558
The most recent case called them phantom cases.
88
00:07:08,558 --> 00:07:09,558
fake cases.
89
00:07:09,558 --> 00:07:14,498
if anybody out there is trying to research these cases, use all of those synonyms.
90
00:07:14,698 --> 00:07:25,558
But then what's also shocking is that, or I think surprising and alarming, is that pro se
litigants, litigants who are representing themselves without lawyers, and a lot of people
91
00:07:25,558 --> 00:07:31,258
are saying AI is great for access to justice and people not needing to hire a lawyer.
92
00:07:31,298 --> 00:07:36,462
But pro se litigants, at least 12 by my count, have also submitted
93
00:07:36,462 --> 00:07:40,645
court filings, either complaints or pleadings or briefs.
94
00:07:40,645 --> 00:07:53,153
And that is causing a burden on the court personnel and opposing counsel to research those
cases, spend time figuring out that the cases don't exist, pointing them out to the pro se
95
00:07:53,153 --> 00:07:55,955
litigant, and then the judge who...
96
00:07:56,115 --> 00:08:05,652
Those cases say that the courts exercise what they call special solicitude, or they're a
little lenient on litigants who don't have lawyers, but they have to remind them, hey, you
97
00:08:05,652 --> 00:08:06,136
can't...
98
00:08:06,136 --> 00:08:06,616
do this.
99
00:08:06,616 --> 00:08:10,318
If you do this again, we're going to consider imposing sanctions.
100
00:08:10,318 --> 00:08:15,811
And some of the courts have imposed pretty significant sanctions on even pro se litigants.
101
00:08:15,811 --> 00:08:18,412
And then I'll tell you kind of two other categories.
102
00:08:19,033 --> 00:08:23,155
One law firm just keeps doubling down.
103
00:08:23,656 --> 00:08:29,719
It's a law firm filing cases in New York against the New York Department of Education.
104
00:08:29,719 --> 00:08:35,872
And they've won the main case, and they're entitled to their attorney's fees under this
statute.
105
00:08:36,408 --> 00:08:42,822
But they keep using ChatGPT to calculate their fee request or to like support their fee
requests.
106
00:08:42,822 --> 00:08:44,784
And they've done this eight times.
107
00:08:44,784 --> 00:08:58,823
And eight times the judges, different judges in New York, but different judges have said,
we're not accepting this fee request based on ChatGPT's calculations because in ChatGPT's
108
00:08:58,823 --> 00:09:00,364
current state,
109
00:09:00,448 --> 00:09:06,243
it's not reliable as a source for this information, but that law firm did that eight
times.
110
00:09:06,243 --> 00:09:14,550
So if I were one of those judges, I'd be kind of annoyed, but at least I guess they're
persistent and zealously representing their client.
111
00:09:14,550 --> 00:09:19,473
And then one more category I'll point out, one expert witness.
112
00:09:19,494 --> 00:09:24,717
there's only two experts that I've come across so far, but one in an actual case.
113
00:09:25,218 --> 00:09:27,839
I think this was also, I think this was in a state case.
114
00:09:27,839 --> 00:09:29,922
It was someone objecting to
115
00:09:29,986 --> 00:09:32,898
the way a will or an estate was being administered.
116
00:09:32,898 --> 00:09:41,892
And the expert witness tried to use an AI tool to calculate the value of real estate.
117
00:09:41,892 --> 00:09:46,435
And again, the judge basically was like, there's no backup.
118
00:09:46,435 --> 00:09:51,938
This doesn't meet the standard for admissibility of expert testimony.
119
00:09:51,938 --> 00:09:53,426
There's actual rules.
120
00:09:53,426 --> 00:09:55,700
In some states, it's called the Frye standard.
121
00:09:55,700 --> 00:10:00,066
In other states, people follow the Daubert standard.
122
00:10:00,066 --> 00:10:10,180
But this expert hadn't followed any of those standards and had just tried to submit this
expert opinion, this expert report based on AI and the judge was kind of having none of
123
00:10:10,180 --> 00:10:10,880
it.
124
00:10:11,222 --> 00:10:14,966
So those are the four categories that I've come across so far.
125
00:10:14,966 --> 00:10:24,251
Yeah, so AI has actually made some strides recently in regard to calculation of numbers.
126
00:10:24,272 --> 00:10:36,459
I've fallen down a few rabbit holes in AI, and one of them was recently I learned about
tokenizers and how tokenizers translate text into tokens.
127
00:10:36,459 --> 00:10:41,722
And if you look at how it originally tokenized numbers,
128
00:10:41,806 --> 00:10:57,406
Um, it was problematic for arithmetic operations and, um, they've now, uh, kind of changed
strategies around that, but it's just, I'm sure you've seen the, um, there was a, I'll
129
00:10:57,406 --> 00:11:07,406
call it a meme going around about if you went to ChatGPT, maybe, I don't know, six, eight
months ago and asked how many times the letter R appears in strawberry, it
130
00:11:07,406 --> 00:11:11,110
would say three or one or, um,
131
00:11:11,220 --> 00:11:15,392
or four, it would just give kind of inconsistent numbers.
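A minimal sketch of the tokenization issue being described here, assuming the tiktoken library (an assumption, not something mentioned in the episode): the model works with sub-word token IDs rather than individual characters, which is one reason letter-counting questions can trip it up.

```python
# Minimal sketch (assumes the tiktoken package): how a BPE tokenizer splits text.
# The model operates on token IDs, not characters, which is why a question like
# "how many r's are in strawberry" can come back with inconsistent answers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI chat models
word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([tid]) for tid in token_ids]

print(token_ids)        # a short list of integer IDs
print(pieces)           # sub-word chunks; the exact split depends on the encoding
print(word.count("r"))  # ordinary string code counts characters directly: 3
```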
132
00:11:15,392 --> 00:11:25,256
You know, that's one of the challenges with AI: AI is essentially a mathematical representation of its training data.
133
00:11:25,256 --> 00:11:26,536
That's really all it is.
134
00:11:26,536 --> 00:11:26,916
Right.
135
00:11:26,916 --> 00:11:38,351
And it can perform some absolutely magical tasks as a result of this that are really
above the understanding level of even the engineers who've built these systems.
136
00:11:38,351 --> 00:11:40,502
It's really, um,
137
00:11:40,502 --> 00:11:52,677
It's really impressive what output they're able to come up with, but it's a bit of a black
box and it is not deterministic and hallucinations, even when you dial temperature down to
138
00:11:52,677 --> 00:11:56,249
zero are very difficult to control.
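A small illustration of the temperature point, assuming the official OpenAI Python SDK and a hypothetical model name: even at temperature zero, repeated runs are not guaranteed to be identical or accurate, so the output still needs checking.

```python
# Minimal sketch (assumes the openai Python SDK and an OPENAI_API_KEY in the environment):
# send the same prompt several times at temperature=0 and compare the answers.
# Identical answers do not prove correctness, and differing answers show non-determinism.
from openai import OpenAI

client = OpenAI()
prompt = "How many times does the letter r appear in the word strawberry?"

answers = set()
for _ in range(3):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice, for illustration only
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answers.add(resp.choices[0].message.content.strip())

print(answers)  # more than one distinct answer means the runs were not deterministic
```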
139
00:11:56,249 --> 00:11:59,310
there's, you really have to be cautious.
140
00:11:59,310 --> 00:12:09,454
And it sounds like these are trained lawyers who understand the importance of properly
citing
141
00:12:09,454 --> 00:12:19,743
cases and yet they're still doing this after, you know, the attorney in New York was made
the poster child for bad practice.
142
00:12:19,743 --> 00:12:21,385
Like how is this happening?
143
00:12:21,385 --> 00:12:23,326
How is this continuing to happen?
144
00:12:23,340 --> 00:12:28,083
Yeah, it boggles the mind, honestly, because these cases are in the news.
145
00:12:28,083 --> 00:12:37,318
They're written judicial opinions that you can find on all the legal research databases,
but also they're in articles, they're in the newspapers.
146
00:12:37,358 --> 00:12:40,430
And to me, it's surprising that this is still happening.
147
00:12:40,430 --> 00:12:49,805
And I feel like on the pro se side, I can kind of understand that the average citizen who
hasn't gone to law school and doesn't read like legal journals and legal blogs might not
148
00:12:49,805 --> 00:12:51,266
know this is happening.
149
00:12:51,266 --> 00:12:59,851
And we as a profession should probably do a better job of educating the general public
about these use cases for law.
150
00:12:59,851 --> 00:13:01,822
But lawyers, I mean, we have a duty.
151
00:13:01,822 --> 00:13:12,759
Under the ethics obligations and the rules of professional conduct, we have a duty to, quote,
stay abreast of changes, technological changes, the benefits and risks.
152
00:13:12,759 --> 00:13:19,136
So a lot of judges across the country are trying to get ahead of this and issue
153
00:13:19,136 --> 00:13:20,587
standing orders.
154
00:13:20,587 --> 00:13:32,133
Bloomberg and Lexis both have individual trackers of all that, and they keep adding to
these databases and spreadsheets of judges across the country, federal and state court,
155
00:13:32,133 --> 00:13:41,198
who have taken the time to add to or issue new rules in their chambers, in their
courts related to AI.
156
00:13:41,198 --> 00:13:42,342
It's been really interesting.
157
00:13:42,342 --> 00:13:48,972
I've been having my students read a lot of these judges' standing rules and understand and
discern the differences,
158
00:13:48,992 --> 00:13:52,613
and see that certain judges are most concerned about hallucinations.
159
00:13:52,613 --> 00:14:06,909
And so those judges are requiring anybody who files anything in their court to certify
that every citation in that brief or pleading or whatever is accurate and stands for the
160
00:14:06,909 --> 00:14:09,390
proposition for which it's being stated.
161
00:14:09,390 --> 00:14:18,286
Other judges are concerned about things like confidentiality and making sure that lawyers
haven't inadvertently disclosed attorney-client
162
00:14:18,286 --> 00:14:24,726
privileged information by using a public AI tool that's been training on that information.
163
00:14:24,726 --> 00:14:37,062
So different judges across the country have focused on different things, but all of them
have been trying to raise awareness about our ethical obligation when we sign our names on
164
00:14:37,062 --> 00:14:38,749
documents that we submit to court.
165
00:14:38,749 --> 00:14:42,162
I mean, the famous Federal Rule 11
166
00:14:42,162 --> 00:14:46,343
of Civil Procedure, we're signing that we have checked everything.
167
00:14:46,343 --> 00:14:57,307
And so now there's an added layer, an added duty that we have to undertake whenever we're
engaging with these tools that might be baked into whatever software we're using.
168
00:14:57,307 --> 00:15:08,000
we have to, I kind of feel like this is an example of why all this pressure to be fast,
accelerate and expedite, it's a little premature in my opinion.
169
00:15:08,000 --> 00:15:09,562
I feel like we don't,
170
00:15:09,562 --> 00:15:11,994
We shouldn't be focused so much on being fast.
171
00:15:11,994 --> 00:15:13,905
We should focus on being right.
172
00:15:13,905 --> 00:15:26,862
And so if it takes us an extra hour or two or et cetera to check, we have to undertake
that extra step or get someone in our firms, in our offices to be the checker.
173
00:15:26,862 --> 00:15:35,348
It doesn't have to always be the same person, but we need to incorporate the checking into
our protocol, our work protocols and our writing workflow.
174
00:15:35,478 --> 00:15:47,270
Isn't there an implicit certification when you submit an instrument to the court that
you've done these things? So they want a separate certification?
175
00:15:47,506 --> 00:15:50,386
So rule 11 is the one that we talk about a lot.
176
00:15:50,386 --> 00:16:02,166
And when we sign a pleading or sign a brief, we are certifying that it's being submitted
for a proper purpose, not for an improper purpose, not to delay or harass the other side.
177
00:16:02,386 --> 00:16:11,618
that we've, it's, I'm blanking on all the exact language, but that it's based in law and
it's based in the facts that are part of the case.
178
00:16:11,618 --> 00:16:14,859
But now we actually have another duty.
179
00:16:14,859 --> 00:16:20,340
A lot of experts are saying we don't necessarily need to change our rules.
180
00:16:20,340 --> 00:16:23,641
Our rules can encompass these changes in technology.
181
00:16:23,641 --> 00:16:36,265
But we need to raise awareness that part of that signature process now is double checking
that the citations, the statutes, the cases, the regulations that we're citing were not
182
00:16:36,265 --> 00:16:40,236
just fabricated by AI, that they actually exist,
183
00:16:40,264 --> 00:16:43,474
and that they stand for what we're citing them for.
184
00:16:43,474 --> 00:16:49,300
I mean, a lot of the legal AI tools are saying, we reduce hallucinations.
185
00:16:49,434 --> 00:17:00,289
But respectfully, I will say, as a teacher, I've been testing out those tools on rules and
laws that I know really well because I've been teaching them for so long.
186
00:17:00,289 --> 00:17:07,384
And I know the cases that should pop up immediately that are most on point and mandatory
187
00:17:07,384 --> 00:17:10,616
precedent, not just persuasive authority.
188
00:17:10,756 --> 00:17:20,293
And even though the cases that I'm getting from these legal tools exist, they're not
actually the most on point or the most current.
189
00:17:20,293 --> 00:17:30,751
They don't actually, the summaries of these cases don't always hit all the required elements of
the rule, which are different from factors that a court might weigh.
190
00:17:30,751 --> 00:17:35,714
So we're getting there, but I think we still have some work to do.
191
00:17:35,714 --> 00:17:36,255
Yeah.
192
00:17:36,255 --> 00:17:43,740
So what are like best practices for supervising junior lawyers and staff using these
gen AI tools?
193
00:17:45,442 --> 00:17:47,954
Well, I'm so glad you mentioned staff too.
194
00:17:47,954 --> 00:17:53,868
I mean, we have, I think our entire workplace communities need to be educated in these tools.
195
00:17:54,088 --> 00:18:06,658
And in my role as a supervisor of law students and the mentor to law students, the way I'm
approaching it is to kind of set up, when it comes to research, I think we need to
196
00:18:06,658 --> 00:18:12,934
approach research like we would any project, not just taking what's given to us at first
glance.
197
00:18:12,934 --> 00:18:22,598
I like to explain it like breadcrumb trails, teaching our students that if you're going to
research something, sure, you can start with AI, but you actually need to do the same
198
00:18:22,598 --> 00:18:28,381
search in what we call terms and connectors searches, like the Boolean searches.
199
00:18:28,381 --> 00:18:32,442
Do the same search with natural language, like Google-style searches.
200
00:18:32,442 --> 00:18:36,264
Try the same search on two different legal research platforms.
201
00:18:36,304 --> 00:18:41,442
Then, if you're getting the same pool of cases, you can feel comfortable that you have the
right
202
00:18:41,442 --> 00:18:46,944
body of law and you can check and make sure it's the most up to date and the most
accurate.
203
00:18:46,944 --> 00:18:56,328
But just relying on AI to do it, just because it's quick, that's not responsible in my
opinion now.
204
00:18:56,348 --> 00:19:02,171
When I was a practicing attorney, I always used to worry about whether I'd miss something.
205
00:19:02,171 --> 00:19:06,663
And so these are just the same techniques I would use back then before AI even started.
206
00:19:06,663 --> 00:19:09,614
I would start a research trail from
207
00:19:09,698 --> 00:19:12,320
And I like the breadcrumb approach.
208
00:19:12,320 --> 00:19:15,773
Start a breadcrumb trail from five different starting points.
209
00:19:15,773 --> 00:19:28,123
And if you end up with the same pool of cases by starting with AI, doing a natural
language search, doing terms and connectors, trying two different platforms, then you know
210
00:19:28,123 --> 00:19:29,964
you've got the right pool of cases.
211
00:19:29,998 --> 00:19:31,078
Gotcha.
212
00:19:33,139 --> 00:19:45,524
Speaking of using AI, I think I just saw a study maybe this morning, and it may have been informal.
213
00:19:45,524 --> 00:19:59,830
I don't know that it was like a peer-reviewed academic study, but it somehow measured
the critical thinking capacity of kids who are using gen AI in writing.
214
00:20:00,154 --> 00:20:15,435
and actually in different capacities and found that those that are relying on these tools
have a reduced capacity, which for me, the first thing that went through my head is
215
00:20:15,435 --> 00:20:16,706
they're using it wrong.
216
00:20:16,706 --> 00:20:20,808
If you're relying, AI should augment what you do.
217
00:20:20,808 --> 00:20:24,401
I use it all the time for critical thinking processes.
218
00:20:24,401 --> 00:20:29,234
I have a co-CEO custom GPT.
219
00:20:29,336 --> 00:20:36,326
that I've uploaded all sorts of things to, like our core values, our ideal customer profile
information.
220
00:20:36,326 --> 00:20:45,010
We have a financial deck with all our finance information, our pitch deck that we used to
basically describe what we do.
221
00:20:45,010 --> 00:20:46,911
And I use it to brainstorm ideas.
222
00:20:46,911 --> 00:20:51,572
Like we have an investor who challenged us.
223
00:20:51,572 --> 00:20:53,613
We're having amazing growth right now.
224
00:20:53,613 --> 00:20:58,194
And he said, what if you doubled or tripled your marketing budget?
225
00:20:58,194 --> 00:20:59,454
How would you,
226
00:20:59,598 --> 00:21:01,398
how would you spend that money?
227
00:21:01,578 --> 00:21:03,418
And I was like, wow, I don't know.
228
00:21:03,418 --> 00:21:13,058
So the first thing I did is I went to my co-CEO GPT and I put in, I had all the budget line
items documented and said, what else if we were to double or triple?
229
00:21:13,058 --> 00:21:25,878
And I got great ideas, but I had to filter through them and it was just a, you know, it
was a shotgun and I needed to pick the pieces out that were valuable.
230
00:21:25,878 --> 00:21:27,018
it's,
231
00:21:27,246 --> 00:21:33,906
How do we embrace gen AI and not lose our, I heard you use the term writer's identity at
one point.
232
00:21:33,906 --> 00:21:34,760
How do we do that?
233
00:21:34,760 --> 00:21:35,610
Yes.
234
00:21:35,740 --> 00:21:38,321
Oh my gosh, so many things I want to respond to in what you just said.
235
00:21:38,321 --> 00:21:52,167
First, on the kids study, I have been encouraging my law students not to use AI as a
substitute for their own critical thinking, but instead, like you said, to kind of help be
236
00:21:52,167 --> 00:21:54,908
a supplement or help enhance their creativity.
237
00:21:54,908 --> 00:22:04,532
But on the critical thinking part, I've been using, so Khan Academy, which I never used
when I was in high school or college, but a lot of my students have.
238
00:22:04,590 --> 00:22:07,410
They have an AI tool called Khanmigo.
239
00:22:07,410 --> 00:22:11,170
It's a play on the Spanish for come with or with me, I think.
240
00:22:11,170 --> 00:22:12,970
That's their AI tool.
241
00:22:13,050 --> 00:22:19,290
It is so awesome because it's set up like a Socratic tool where it doesn't just give you
the answer.
242
00:22:19,290 --> 00:22:22,930
It actually helps you critically think through problems.
243
00:22:22,930 --> 00:22:25,430
And here's a little hint to the lawyers out there.
244
00:22:25,430 --> 00:22:28,710
If you click on the humanities button, it knows law.
245
00:22:29,230 --> 00:22:34,434
So while it's set up for like K through 12, I think it can be useful in law school.
246
00:22:34,434 --> 00:22:46,264
And to your point about students or younger people using this, but maybe skipping over the
critical thinking, there's a writing tutor through Khanmigo, but it won't just write it for
247
00:22:46,264 --> 00:22:46,644
you.
248
00:22:46,644 --> 00:22:54,641
It asks you questions and like a Socratic tutor makes you have to think critically through
what you wanna write about.
249
00:22:54,641 --> 00:23:00,015
And then you write about it and then it asks you to think critically about how you wanna
improve it.
250
00:23:00,015 --> 00:23:03,942
So I think in education and legal education,
251
00:23:03,942 --> 00:23:09,343
using AI tools that are set up and designed to be more like your GPT.
252
00:23:09,343 --> 00:23:13,544
Not just give you the answers, but to make you think.
253
00:23:13,925 --> 00:23:22,067
And then there's those prompting techniques, which I'm not an expert at, but tree of
thought, where you ask the AI to do a tree of thought prompt.
254
00:23:22,067 --> 00:23:33,270
And if you're brainstorming different solutions to problems, it can go down three
different paths for solutions to problems or brainstorming or differences
255
00:23:33,270 --> 00:23:41,175
of opinions, like lawyers can use it to debate different points of view, or do counter
arguments and arguments back and forth.
256
00:23:41,175 --> 00:23:50,320
And then the chain of thought prompt technique, which is show your work, kind of step by
step moving through a critical thinking analysis.
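A rough sketch of the two prompting patterns described here; the wording and the sample legal question are illustrative assumptions, not quoted templates from Professor Brown's coursework.

```python
# Rough sketch (illustrative wording only): chain-of-thought vs. tree-of-thought prompts.
question = "Does the parol evidence rule bar testimony about the parties' earlier oral agreement?"  # hypothetical issue

# Chain of thought: ask the model to show its work step by step.
chain_of_thought_prompt = (
    f"{question}\n"
    "Work through the analysis step by step: state the governing rule, "
    "apply each part of the rule to the facts, and only then give a conclusion."
)

# Tree of thought: ask for several independent lines of analysis before comparing them.
tree_of_thought_prompt = (
    f"{question}\n"
    "Brainstorm three different lines of analysis (for example, the plaintiff's best argument, "
    "the defendant's best argument, and a neutral judge's view). Develop each one separately, "
    "then compare them and explain which is strongest and why."
)

print(chain_of_thought_prompt)
print(tree_of_thought_prompt)
```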
257
00:23:50,320 --> 00:24:01,634
So I think if we can incorporate and educate all of us on how not to use these tools just
to outsource our thinking, because of course, we're going to atrophy, we're not going to
258
00:24:01,634 --> 00:24:03,655
be learning, we're going to go backwards.
259
00:24:03,655 --> 00:24:09,358
But if we can set it up so it's pushing us harder, it's leveling us up.
260
00:24:09,358 --> 00:24:12,500
I identify as a writer, first and foremost.
261
00:24:12,500 --> 00:24:15,241
I love the concept of writer identity.
262
00:24:15,822 --> 00:24:25,048
When I fill out forms, I travel a lot, so when I fill out forms for, I don't know,
immigration or whatever, and I have to put my occupation, I put writer.
263
00:24:25,048 --> 00:24:29,592
Way before my, quote, fancier titles of lawyer and professor, because I
264
00:24:29,592 --> 00:24:32,484
deeply in my soul identify as a writer.
265
00:24:32,565 --> 00:24:35,947
So I could be really threatened by AI, right?
266
00:24:36,008 --> 00:24:37,769
But I'm not, I love it.
267
00:24:37,769 --> 00:24:40,101
I use it as a super thesaurus.
268
00:24:40,101 --> 00:24:51,542
Things I can't do with a traditional dictionary thesaurus, I can ask it questions, I can
have it help me think of the perfect, give me 10 examples of this verb that I'm trying to
269
00:24:51,542 --> 00:24:53,443
capture this tone with.
270
00:24:53,443 --> 00:24:56,585
So I think there's ways like that that we can,
271
00:24:56,834 --> 00:25:05,158
you know, figure out first of all who we are, how we already identify as writers, but what
aspects of writing maybe do we need a little help with?
272
00:25:05,158 --> 00:25:08,339
What aspects of writing do we find more tedious?
273
00:25:08,339 --> 00:25:18,624
Use it for things that can help us stay in a flow state more on the aspects of writing or
our work that we love doing and we feel amazing doing.
274
00:25:18,624 --> 00:25:22,758
Ethan Mollick, he's a Wharton professor, wrote the book.
275
00:25:22,758 --> 00:25:27,099
I'm using that as my textbook for my AI class at school.
276
00:25:27,099 --> 00:25:35,362
He challenges all of us to spend 10 hours using these tools for things that we love or
things that we just think are fun.
277
00:25:35,362 --> 00:25:47,745
And we'll learn so much about how these tools can make us better and level up instead of
just using it to cheat or get around doing our real work.
278
00:25:47,918 --> 00:25:49,807
So that's my take on it.
279
00:25:49,807 --> 00:25:52,768
Yeah, I'm a big fan of Ethan Mollick.
280
00:25:52,768 --> 00:25:56,690
I think he's great.
281
00:25:57,150 --> 00:26:02,112
He's a little more bullish than me on AI's capacity to comprehend.
282
00:26:02,112 --> 00:26:05,013
I'm a little more bearish on that.
283
00:26:05,013 --> 00:26:12,076
There's been several studies that have demonstrated a counter case for AI's ability to
comprehend.
284
00:26:12,076 --> 00:26:16,538
There was one by the Facebook intelligence team.
285
00:26:16,690 --> 00:26:20,392
It's called the GSM 8K symbolic test.
286
00:26:20,392 --> 00:26:25,015
The GSM 8K is grade school math.
287
00:26:25,015 --> 00:26:27,016
There's 8,000 questions.
288
00:26:28,697 --> 00:26:40,564
What this study did was change the test in immaterial ways and present it to AI and then
measure its ability to respond.
289
00:26:40,564 --> 00:26:43,150
Some of the changes were very simple.
290
00:26:43,150 --> 00:26:45,410
I'm going to oversimplify here because
291
00:26:45,410 --> 00:26:53,515
We don't want to take too much time, but you know, Sarah went to the store and got 10
apples and they changed Sarah's name to Lisa.
292
00:26:53,515 --> 00:26:56,997
That little change, depending on the sophistication of the model, right?
293
00:26:56,997 --> 00:27:02,880
Because again, that GSM8K battery of tests was part of their
training material.
294
00:27:02,880 --> 00:27:05,461
So the first thing it does is default to that.
295
00:27:05,522 --> 00:27:13,726
So the symbolic piece is the new part of the GSM8K and it really threw things off
dramatically.
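A toy sketch of the kind of perturbation being described (illustrative only, not the benchmark's actual code): swap the name and the numbers in a word problem and check whether a model's answer survives changes that don't affect the math.

```python
# Toy sketch (not the benchmark's actual code): build "symbolic" variants of a grade-school
# word problem by swapping the name and the numbers. The correct answer changes only with
# the numbers, so a model that merely memorized the original phrasing tends to slip up.
import random

TEMPLATE = ("{name} went to the store and bought {a} apples and {b} oranges. "
            "How many pieces of fruit does {name} have?")

def make_variant(rng: random.Random) -> tuple[str, int]:
    name = rng.choice(["Sarah", "Lisa", "Priya", "Marcus"])
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```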
296
00:27:13,774 --> 00:27:19,702
So I'm not bullish right now on AI's ability to comprehend.
297
00:27:20,604 --> 00:27:21,145
He is.
298
00:27:21,145 --> 00:27:23,799
That's my only disagreement with him, though.
299
00:27:23,799 --> 00:27:26,303
I think he's a great guy to follow on LinkedIn.
300
00:27:26,303 --> 00:27:27,634
He has great content.
301
00:27:28,462 --> 00:27:31,202
I mean, I'm kind of in the same vein.
302
00:27:31,202 --> 00:27:36,282
I don't think that these tools are ready to replace legal writers.
303
00:27:36,282 --> 00:27:45,370
As a legal writing professor, I teach that all good legal writing is structured around
rules.
304
00:27:45,370 --> 00:27:51,752
And legal rules can either be elements-based, like a checklist of required elements.
305
00:27:51,793 --> 00:27:54,894
I use an analogy when I teach my students: driving a car.
306
00:27:54,894 --> 00:27:56,394
You know, you need a couple things.
307
00:27:56,394 --> 00:28:04,834
You have to have keys, a working battery, fuel, unless it's like an electric car, and
four inflated tires.
308
00:28:04,834 --> 00:28:07,174
If one of those is missing, the car doesn't move.
309
00:28:07,174 --> 00:28:08,594
That's elements.
310
00:28:08,814 --> 00:28:14,954
And then there's fact rules based on factors which a court might weigh, which is like
searching for an apartment.
311
00:28:14,954 --> 00:28:21,494
You think you have a bunch of factors, but you might compromise one or the other if you
had a great location for a cheaper price, whatever.
312
00:28:22,034 --> 00:28:24,832
But AI right now, when I've tried it,
313
00:28:24,832 --> 00:28:28,996
it doesn't understand the difference between an elements rule and a factor-based rule.
314
00:28:28,996 --> 00:28:32,408
But that can be a completely different legal analysis.
315
00:28:32,449 --> 00:28:42,958
So I have found that I've had to teach the AI tool the difference between elements and
factors before it can give me a well-structured legal rule.
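A toy sketch of the structural difference she is describing (illustration only, not anything from her course materials): an elements rule fails if any single element is missing, while a factor-based rule weighs considerations that can offset one another.

```python
# Toy sketch (illustration only): elements rules vs. factor-based rules.

def elements_rule_satisfied(elements: dict[str, bool]) -> bool:
    # Like the car analogy: every required element must be present, or the rule fails.
    return all(elements.values())

def factors_rule_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    # Like apartment hunting: factors are weighed, and strength on one can offset weakness on another.
    return sum(weights[name] * value for name, value in factors.items())

print(elements_rule_satisfied({"keys": True, "battery": True, "fuel": True, "inflated_tires": False}))  # False
print(factors_rule_score({"location": 0.9, "price": 0.4, "size": 0.6},
                         {"location": 0.5, "price": 0.3, "size": 0.2}))  # weighted total to compare against alternatives
```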
316
00:28:42,958 --> 00:28:45,041
So I kind of agree with you.
317
00:28:45,041 --> 00:28:52,256
I mean, this is a different example, obviously, but all this touting out there about speed
and acceleration.
318
00:28:52,710 --> 00:29:00,838
it does not make me faster as a legal writer because the stuff that it gives me right now
is not actually accurate in terms of structure.
319
00:29:00,838 --> 00:29:10,136
And when I'm teaching future lawyers how to write well in the legal space, everything
boils down to structure of the rule.
320
00:29:10,136 --> 00:29:12,188
The whole thing is based on the rule.
321
00:29:12,228 --> 00:29:18,574
So I think we've got some work to do, but I think we can train these tools to understand
why that's important.
322
00:29:18,817 --> 00:29:28,009
It can do bad legal writing, it can write quickly, but that's not gonna solve the
problems that we need to be solving for our clients.
323
00:29:28,009 --> 00:29:31,522
And it's gonna annoy a lot of judges who have to read it.
324
00:29:31,522 --> 00:29:31,982
Yeah.
325
00:29:31,982 --> 00:29:38,356
And you know, I've heard, I've had debates about this here on the podcast and I've heard,
well, it doesn't matter.
326
00:29:38,356 --> 00:29:40,727
And this was more around reasoning.
327
00:29:40,727 --> 00:29:43,228
Comprehension is a precursor to reasoning.
328
00:29:43,228 --> 00:29:49,412
You can't reason your way to an answer if you can't comprehend the problem is my argument.
329
00:29:49,412 --> 00:29:54,684
So I think it does matter, and, you know, there's a lot of debate around this.
330
00:29:54,684 --> 00:30:01,270
And I think the reason it's important to understand if these models are reasoning and if
they're comprehending the input or the
331
00:30:01,270 --> 00:30:08,513
prompt or the question is because it helps you understand where to use the tool and where
not to.
332
00:30:08,513 --> 00:30:15,696
And it also gives you a lens through which to scrutinize the output.
333
00:30:16,156 --> 00:30:22,239
You should be skeptical today and probably for the foreseeable future.
334
00:30:22,239 --> 00:30:30,232
So I do think it is a relevant debate on whether or not these, because the argument is
335
00:30:30,232 --> 00:30:32,823
Well, we don't know how people reason, right?
336
00:30:32,823 --> 00:30:46,866
The brain is very poorly understood, and its inner workings, and you know, it's
a collection of neurons firing in a way that generates amazing things, writing and art and
337
00:30:46,866 --> 00:30:49,607
speech and creativity.
338
00:30:49,647 --> 00:31:00,300
And you know, we see some of these things come out of AI, but it's like correlation does
not imply causation is what I go back to just because something
339
00:31:01,303 --> 00:31:04,895
you know, outputs something that looks similar to something else.
340
00:31:04,895 --> 00:31:07,157
It doesn't mean it's the same driving force.
341
00:31:07,157 --> 00:31:11,400
So yeah, I've had the debate and continue to have the debate.
342
00:31:11,400 --> 00:31:14,112
It's a relevant topic, whether or not these things reason.
343
00:31:14,112 --> 00:31:21,277
And I think the reasoning, um, terminology is being thrown out there way too early and way
too often.
344
00:31:21,277 --> 00:31:28,842
Um, but you know, people see that differently, and that's okay.
345
00:31:29,454 --> 00:31:30,994
Yes, absolutely.
346
00:31:30,994 --> 00:31:31,314
Absolutely.
347
00:31:31,314 --> 00:31:40,434
That's why I kind of like with my students, at least I like the show your work kind of
thing, that chain of thought prompting, because you can't just leap from A to Z without
348
00:31:40,434 --> 00:31:41,614
explaining your reasoning.
349
00:31:41,614 --> 00:31:44,974
You have to walk through it, and then you start to see the flaws in the reasoning.
350
00:31:44,974 --> 00:31:52,546
If there is a flaw, there's assumption, there's logic leaps, there's flawed assumptions,
false assumptions, et cetera.
351
00:31:52,546 --> 00:31:53,327
Yeah.
352
00:31:53,327 --> 00:31:57,449
So, um, I use a tool, it's a custom GPT.
353
00:31:57,449 --> 00:31:59,981
It's fairly new in my tool belt.
354
00:31:59,981 --> 00:32:03,994
it's called prompt GPT and it helps me write prompts.
355
00:32:03,994 --> 00:32:09,868
But the last time you and I spoke, we talked about like strategies for prompt engineering
in a legal context.
356
00:32:09,868 --> 00:32:15,742
Like what is your, do you have any advice for people that are trying to wrap their heads
around that?
357
00:32:15,788 --> 00:32:16,509
Yes.
358
00:32:16,509 --> 00:32:16,898
Oh my gosh.
359
00:32:16,898 --> 00:32:18,110
This is one of my favorite topics.
360
00:32:18,110 --> 00:32:30,130
I actually just wrote an article on this too, because I found, and this is me being a
little quirky, but I found that my own interaction with AI taught me how to be a better
361
00:32:30,130 --> 00:32:39,157
communicator to human beings in terms of prompting, if I needed them to do something, like
if I'm supervising someone or being a mentor.
362
00:32:39,157 --> 00:32:45,462
So I wrote a little piece about this, but I have learned several great techniques of
prompting.
363
00:32:45,762 --> 00:32:51,556
that are just kind of intuitive in terms of getting good output out of humans too.
364
00:32:51,556 --> 00:33:03,485
One example, I mean, I didn't make this up, but the original prompt engineering gurus were
telling us, give it a lot of context, give it a role, give it context for what tasks
365
00:33:03,485 --> 00:33:05,957
you're gonna give it, give it the task.
366
00:33:05,957 --> 00:33:12,226
If it's a law-related task, give it the sort of phase or stage of
367
00:33:12,226 --> 00:33:18,731
the litigation that you're working on or the stage of the transactional negotiation you're
working on.
368
00:33:18,731 --> 00:33:21,273
So it's context again for the task.
369
00:33:21,273 --> 00:33:25,936
Give it the format you want the output in and then give it the tone or the style.
370
00:33:26,117 --> 00:33:38,566
And then what I think is so fun for people who haven't engaged with this too much yet
already is let it do its thing and then change one of those parameters that you gave it.
371
00:33:38,752 --> 00:33:49,891
and see how it adjusts, like the tone, make something more academic sounding or more
sophisticated sounding or more professional sounding, make it less, make it more humorous.
372
00:33:50,091 --> 00:33:51,783
So that's one thing, give it context.
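A minimal sketch of the role / context / task / stage / format / tone structure described here; the field names and the sample legal scenario are illustrative assumptions, not a template from the episode.

```python
# Minimal sketch (field names and scenario are illustrative): a prompt assembled from the
# pieces mentioned above, so any one piece, such as the tone, can be swapped and compared.
PROMPT_TEMPLATE = """You are {role}.
Context: {context}
Stage of the matter: {stage}
Task: {task}
Output format: {output_format}
Tone: {tone}"""

prompt = PROMPT_TEMPLATE.format(
    role="a senior litigation associate at a construction law firm",
    context="Our client is a subcontractor in a payment dispute on a commercial project.",
    stage="Pre-answer: we are preparing to oppose a motion to dismiss.",
    task="Draft an outline of the arguments opposing the motion.",
    output_format="A numbered outline with a one-sentence explanation per point.",
    tone="Professional and assertive.",
)
print(prompt)  # re-run after changing a single field (e.g. the tone) to see how the output shifts
```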
373
00:33:51,783 --> 00:34:01,221
And kind of tying this back to what I said at the beginning, I feel like when I was an
associate in a law firm and my bosses would give me an assignment, they wouldn't give me
374
00:34:01,221 --> 00:34:02,261
any of that context.
375
00:34:02,261 --> 00:34:04,323
And I had no idea what I was doing.
376
00:34:04,904 --> 00:34:08,056
It was the fake-it-until-you-make-it era of my life, which I
377
00:34:08,056 --> 00:34:09,467
highly do not recommend.
378
00:34:09,467 --> 00:34:11,087
Talk about bad well-being.
379
00:34:11,087 --> 00:34:12,290
I had no idea what I was doing.
380
00:34:12,290 --> 00:34:15,853
Even though I was smart and hardworking, just give me some context.
381
00:34:15,853 --> 00:34:17,834
I could have done such a better job.
382
00:34:18,075 --> 00:34:18,956
Examples.
383
00:34:18,956 --> 00:34:24,570
We might have heard the AI terminology of few shot, one shot, or zero shot.
384
00:34:24,570 --> 00:34:31,166
And that's terminology that just means, are you giving it no examples, one example, or more
than one example?
385
00:34:31,567 --> 00:34:35,604
Apparently, the studies show that it does better work if
386
00:34:35,604 --> 00:34:41,258
you give it an example, unless you want it to be wildly creative and do its own thing.
387
00:34:41,419 --> 00:34:48,164
But if you don't, if you want it to give you something that looks like a document you've
done before, give it examples.
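A small sketch of the zero-shot versus few-shot distinction; the example summaries below are invented placeholders, not real cases or citations.

```python
# Small sketch: zero-shot vs. few-shot versions of the same request.
# The example summaries are invented placeholders, not real cases or citations.
zero_shot = "Summarize the attached case in three sentences for a client update."

few_shot = """Summarize the attached case in three sentences for a client update.
Here are two examples of the style I want:

Example 1: The court granted summary judgment for the owner because the contractor missed
the statutory notice deadline. The practical takeaway is to calendar notice deadlines at
project kickoff. The court did not reach the damages issues.

Example 2: The appellate court reversed, holding the payment clause unenforceable as written.
Clients should review similar clauses in their subcontracts. The case was remanded for trial.

Now summarize the attached case in the same style."""

print(zero_shot)
print(few_shot)
```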
388
00:34:48,805 --> 00:34:59,594
I learned from articles that I've read, again, I didn't make this up, that these tools
respond better to positive instruction rather than negative instruction.
389
00:34:59,594 --> 00:35:01,014
So give it
390
00:35:01,014 --> 00:35:04,357
positive or concrete affirmative instructions.
391
00:35:04,357 --> 00:35:09,460
Do this, do that, not don't do this or stay away from that.
392
00:35:09,541 --> 00:35:23,873
And again, just kind of being funny, I think as an associate, I reacted better when my
bosses would say, you know, highlight this factor or be assertive in this realm, not don't
393
00:35:23,873 --> 00:35:30,414
mention that theory. And there's science behind this, that our brains
take an extra step
394
00:35:30,414 --> 00:35:35,234
to process negative instructions, which makes us slower and less effective.
395
00:35:35,854 --> 00:35:39,134
Not to make this about me, but I take boxing lessons, for instance.
396
00:35:39,134 --> 00:35:46,674
That really helped me manage my own well-being and performance anxiety and public speaking
anxiety, et cetera.
397
00:35:46,674 --> 00:35:59,170
And I laugh now because my trainer, his name is Lou, when he gives me positive
instructions, like hands up, boxer stance, move your shoulders, move your head, I do it.
398
00:35:59,170 --> 00:36:11,513
But when he says, stop doing that, stop dragging your glove down or stop dropping your
hands, it takes me a beat to process the thing he's telling me not to do.
399
00:36:11,574 --> 00:36:13,694
And then I have to remember what to do.
400
00:36:13,694 --> 00:36:20,756
So I think that is relevant in prompting that if we stick to positive instructions, the
tools function better, apparently.
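A tiny illustration of the rephrasing she describes, with made-up instructions: the same guidance framed as what to do rather than what to avoid.

```python
# Tiny illustration (made-up instructions): the same guidance framed negatively and positively.
negative_instruction = "Don't bury the notice-deadline issue, and don't mention the abandoned delay theory."

positive_instruction = ("Lead with the notice-deadline issue and keep the argument focused "
                        "on the two theories we briefed.")

print(negative_instruction)
print(positive_instruction)
```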
401
00:36:21,010 --> 00:36:27,328
I also kind of love the studies that have been done about what they call emotional
prompting.
402
00:36:27,552 --> 00:36:34,728
Now, I love interacting with these tools emotionally, because I'm just that kind of person
and that kind of teacher and that kind of writer.
403
00:36:34,728 --> 00:36:42,574
So when I tell it, oh, wow, that's amazing, or you did a great job with that, or I'm kind
of even more casual, I'll be like, you rock.
404
00:36:42,574 --> 00:36:44,036
It'll come back to me.
405
00:36:44,036 --> 00:36:45,069
You rock too.
406
00:36:45,069 --> 00:36:47,278
I love co-creating with you.
407
00:36:47,362 --> 00:36:56,966
So just think about how funny that is, that if we use more positive emotional prompting in
our own supervising, we might get better work product out of our
408
00:36:57,122 --> 00:37:00,204
supervisees, but apparently it works with AI.
409
00:37:00,204 --> 00:37:04,847
If you give it positive emotional prompting, it works harder.
410
00:37:05,548 --> 00:37:07,449
You mentioned meta prompting.
411
00:37:07,449 --> 00:37:14,473
You didn't call it that, you know, telling it how to prompt, or asking it how we can be a
better prompter.
412
00:37:14,794 --> 00:37:25,521
I think that's amazing, like asking, we can give it a prompt, but then ask that one final
question, that one extra question, how can I, or what else do you need to know to do a
413
00:37:25,521 --> 00:37:27,232
good job with this task?
414
00:37:27,278 --> 00:37:29,158
So I think that's interesting too.
415
00:37:29,158 --> 00:37:36,100
And then we talked about like chain of thought prompting, asking it to just explain things
step by step.
416
00:37:36,100 --> 00:37:46,193
I like tree of thought prompting for lawyering because we are constantly debating
different perspectives on things and asking AI to generate what's called a tree of thought
417
00:37:46,193 --> 00:37:53,565
prompt and come up with almost a dialogue among three different points of view about a
particular issue.
418
00:37:53,565 --> 00:37:56,514
It helps us brainstorm, be more creative.
419
00:37:56,514 --> 00:37:59,355
Think of counter arguments we might not have thought of.
420
00:37:59,355 --> 00:38:01,756
Think of arguments we might not have thought of.
421
00:38:01,756 --> 00:38:06,538
Game out what the other side might be arguing, et cetera.
422
00:38:06,538 --> 00:38:17,703
So I think those are kind of the things that I've learned over, wow, I guess almost two
years now of playing around with these tools and experimenting and making mistakes.
423
00:38:17,883 --> 00:38:25,066
I like that 10-hour challenge that Ethan Mollick put out there because it's not about being
perfect at this immediately.
424
00:38:25,066 --> 00:38:37,677
We got to practice and play around with this and make a ton of mistakes and let it make
mistakes so we can discern what it's good at and what it's not yet good at and not be
425
00:38:37,677 --> 00:38:41,642
frustrated or disappointed when it doesn't understand our instructions.
426
00:38:41,642 --> 00:38:42,622
Yeah.
427
00:38:43,423 --> 00:38:58,597
So you and I talked a little bit about legal research tools and there's a lot of debate on
how well suited today's AI technology is for legal research.
428
00:38:58,597 --> 00:39:08,622
The Stanford study from earlier, well, I guess that was last year now, that came out
and highlighted some challenges, and they were a little broad.
429
00:39:08,622 --> 00:39:11,844
They were a lot broad actually in their definition of hallucination.
430
00:39:11,844 --> 00:39:19,240
Some things weren't hallucinations that they classified as such, like missing information.
That's not a hallucination.
431
00:39:19,240 --> 00:39:23,774
That's just an incomplete answer, essentially.
432
00:39:23,774 --> 00:39:31,930
But you know, there does seem like there's a shifting paradigm, you know, from
traditional legal research to AI assisted legal research.
433
00:39:31,930 --> 00:39:35,392
Like what does that picture look like from your perspective?
434
00:39:35,896 --> 00:39:48,432
I mean, right now, I honestly feel like I gave it, I tried a query or prompt this morning,
again, just to test out, has it evolved in the last month really since I've been super
435
00:39:48,432 --> 00:39:49,832
focused on this?
436
00:39:50,873 --> 00:40:01,527
And I gave it a pretty easy, what I thought an easy example, like I wanna set up a legal
research assignment for my students about when and whether you can serve a litigant
437
00:40:01,527 --> 00:40:03,018
through social media.
438
00:40:03,018 --> 00:40:07,700
if alternative means of serving pleadings is not available.
439
00:40:07,700 --> 00:40:17,584
And so I asked the tools, you know, find me cases in which litigants have been able to
serve other litigants via Instagram.
440
00:40:17,844 --> 00:40:21,766
It gave me three examples, three cases back very confidently.
441
00:40:21,766 --> 00:40:23,446
I wrote these down so I could tell you.
442
00:40:23,446 --> 00:40:25,307
The first case was not about Instagram.
443
00:40:25,307 --> 00:40:26,948
It was about email.
444
00:40:27,248 --> 00:40:30,229
It was a service case, but it was about email, not Instagram.
445
00:40:30,229 --> 00:40:32,710
And I know Instagram cases exist, by the way.
446
00:40:33,263 --> 00:40:39,269
Then it gave me another case about privacy and social media issues not related to service
at all.
447
00:40:39,269 --> 00:40:46,957
And then it gave me a case, a disciplinary action against a lawyer who did improper things
on social media.
448
00:40:46,957 --> 00:40:49,159
So I didn't get any helpful cases.
449
00:40:49,700 --> 00:40:55,426
Now when I have gently pushed back on some of these tools and said, you know,
450
00:40:55,970 --> 00:40:57,931
That's not a hallucination.
451
00:40:57,931 --> 00:41:01,073
All those cases they gave me exist, but they just didn't help me.
452
00:41:01,073 --> 00:41:03,534
And I know cases exist out there.
453
00:41:04,055 --> 00:41:06,996
The feedback I've gotten is that I'm using it wrong.
454
00:41:07,176 --> 00:41:13,760
But I'm using it as a person with 30 years of legal experience would use it.
455
00:41:13,900 --> 00:41:18,202
And I'm also using it the way a first-year law student would use it.
456
00:41:18,503 --> 00:41:20,964
And I've tried both approaches.
457
00:41:20,964 --> 00:41:24,556
And I still get things like that where I don't find it.
458
00:41:24,556 --> 00:41:25,306
that helpful.
459
00:41:25,306 --> 00:41:32,912
Now maybe that's a quirky research issue, but it's also a legitimate legal research issue
that a lawyer would ask the question.
460
00:41:32,912 --> 00:41:38,415
So again, it kind of goes back to the advice I mentioned earlier that we can't get frustrated.
461
00:41:38,415 --> 00:41:39,886
I need to learn how to change.
462
00:41:39,886 --> 00:41:41,957
Maybe my prompt wasn't so good.
463
00:41:41,957 --> 00:41:44,919
But then I also want to go back to traditional.
464
00:41:44,919 --> 00:41:49,612
I want to kind of ping-pong back and forth with traditional legal research.
465
00:41:49,814 --> 00:41:54,497
And then that might give me one case that I can then plug into the AI.
466
00:41:54,497 --> 00:42:05,463
So I can go back and forth between traditional terms and connector searches, natural
language searches, grab onto something that I know is on point there, take that back to
467
00:42:05,463 --> 00:42:10,005
the AI tool, and then kind of feed that in and see what the AI tool gives me.
468
00:42:10,005 --> 00:42:19,084
But we can't, in my opinion right now, as of today, we cannot solely rely on AI legal
research and just
469
00:42:19,084 --> 00:42:21,836
be done with it and say, look how efficient we are.
470
00:42:22,157 --> 00:42:24,240
That's not responsible.
471
00:42:24,240 --> 00:42:31,527
We need to check it against traditional legal research methods using those breadcrumb
techniques that I mentioned earlier.
472
00:42:31,527 --> 00:42:38,344
I think it will get better, obviously, but for now, I would not feel comfortable using it
just on its own.
473
00:42:38,502 --> 00:42:44,747
So how do you maintain that balance between quality and accuracy when you're leveraging
these technologies?
474
00:42:44,747 --> 00:42:47,418
I mean, 'cause, is it really a time savings?
475
00:42:47,418 --> 00:42:53,593
Like I have seen, I talk a lot about Microsoft co-pilot and I think Microsoft has a lot of
work to do.
476
00:42:53,593 --> 00:42:57,115
In my opinion, it is the Internet Explorer of AI platforms.
477
00:42:57,115 --> 00:42:58,836
It's not very good.
478
00:42:59,397 --> 00:43:00,868
It does have some advantages.
479
00:43:00,868 --> 00:43:06,822
Privacy-wise, you know, their terms of service is airtight, and their integration
480
00:43:06,996 --> 00:43:15,751
into the M365 suite is great, but the output compared to what I get with ChatGPT and Claude
just isn't.
481
00:43:15,751 --> 00:43:27,398
So I fall back a lot to those tools, but like how should folks be thinking about
maintaining that balance with quality and accuracy when they're leveraging these new
482
00:43:27,398 --> 00:43:28,206
tools?
483
00:43:28,206 --> 00:43:31,606
Yeah, I fall back to ChatGPT; it's still my go-to.
484
00:43:31,606 --> 00:43:38,726
I try to use the most up-to-date version of ChatGPT, although I haven't done the $200-a-month
version yet.
485
00:43:39,586 --> 00:43:44,246
Yeah, I'm good with whatever I can do for my 20 bucks a month.
486
00:43:44,246 --> 00:43:51,746
But as a writer, and I'm not talking about legal writing right now, I do a lot of writing
on my books and things like that.
487
00:43:51,746 --> 00:43:55,246
And I've come up with a protocol, which I...
488
00:43:55,246 --> 00:43:59,686
encourage people to use to balance accuracy, but what was the other word you used?
489
00:43:59,686 --> 00:44:02,466
Accuracy and quality.
490
00:44:03,886 --> 00:44:06,706
It's kind of how I'm using ChatGPT.
491
00:44:06,706 --> 00:44:15,146
What I use it for is language or fact checking, which I know I shouldn't use ChatGPT for
fact checking, but I'll ask it a really obscure fact.
492
00:44:15,146 --> 00:44:24,934
I'm writing a travel memoir right now, which has nothing to do with law, but I'll have a
very obscure question I want to ask it, like, can I see the Colosseum
493
00:44:25,176 --> 00:44:27,348
from this particular spot in Rome?
494
00:44:27,348 --> 00:44:30,110
And I think that's kind of a cool question to ask ChatGPT.
495
00:44:30,110 --> 00:44:37,757
Like geographically, because I remember seeing the Colosseum when I was standing in a
spot, but I don't know if I'm misremembering.
496
00:44:37,757 --> 00:44:42,201
So then I'll ask ChatGPT and it'll give me this awesome answer that I want to use.
497
00:44:42,201 --> 00:44:45,583
But then something in the back of my head is like, I need to check that.
498
00:44:45,824 --> 00:44:49,887
And then I checked it and thankfully it was true.
499
00:44:50,408 --> 00:44:53,090
So I think we have to constantly have that,
500
00:44:53,090 --> 00:45:00,850
that bobbing and weaving, that back and forth of getting excited to use these tools
because they could level up our creativity.
501
00:45:00,850 --> 00:45:04,828
I mean, it's enabled me to stay in flow,
502
00:45:04,828 --> 00:45:09,681
just like that expression, when I'm writing, without getting sidetracked if I'm stuck on a
word.
503
00:45:09,681 --> 00:45:10,642
I love it for that.
504
00:45:10,642 --> 00:45:14,187
I can ask it for 10 words or 20 words or 100.
505
00:45:14,187 --> 00:45:15,785
It doesn't get tired.
506
00:45:16,086 --> 00:45:17,587
But then I have to check it.
507
00:45:17,587 --> 00:45:23,202
If anything in the back of our minds is like, huh, that sounds like it might be too good
to be true or
508
00:45:23,202 --> 00:45:32,887
sounds slightly off, we just have to have backup protocols and then bounce out of AI into
traditional research, regular Google, right?
509
00:45:32,887 --> 00:45:41,772
Or, like, whatever other resource is your go-to, not that Google is always accurate, obviously,
but other sources to check.
510
00:45:42,152 --> 00:45:53,458
In the law firm world, if you don't have the time or your billing rate is too high for you
to be the checker, establish a role for someone in the law firm to be the checker.
511
00:45:53,810 --> 00:46:05,135
Like right now I'm proofreading my entire book manuscript, basically because I like to do
that, but it's very time-consuming and I have to kind of change my workflow.
512
00:46:05,395 --> 00:46:17,090
But setting up protocols, setting up checklists, talking about this in our law offices to
make sure that everybody, and you mentioned staff earlier, be inclusive, include everybody in
513
00:46:17,090 --> 00:46:22,336
the conversations because we all should be experimenting with these tools and
514
00:46:22,336 --> 00:46:24,509
not waiting until they're perfect.
515
00:46:24,509 --> 00:46:27,722
It's much better if we just get to know them.
516
00:46:27,722 --> 00:46:29,915
I like to call it shaking hands with them.
517
00:46:29,915 --> 00:46:38,145
And let's shake hands with these tools and get to know them, introduce ourselves to them,
and let them introduce themselves to us.
518
00:46:38,145 --> 00:46:42,790
And we can probably accomplish great things if we approach it that way.
519
00:46:42,828 --> 00:46:43,398
Yeah.
520
00:46:43,398 --> 00:46:46,420
Well, we're almost out of time and we had so much to talk to you about.
521
00:46:46,420 --> 00:46:55,635
I want to touch on one thing because I thought it was really fascinating when you and I
last spoke and that was like how future lawyers are going to train like athletes and
522
00:46:55,635 --> 00:46:58,038
performers. Can you expand on that concept?
523
00:46:58,038 --> 00:46:59,539
Okay, I love that concept.
524
00:46:59,539 --> 00:47:11,854
I wish I could go back, you know, 16, 20, 30 years and treat myself like an
athlete because, you know, in athletics and among performers like musicians, singers,
525
00:47:11,854 --> 00:47:16,085
dancers, etc., there's not a one-size-fits-all training model.
526
00:47:16,085 --> 00:47:27,404
And unfortunately, I think in the past, you know, legal education and legal training has sort
of promoted this one-size-fits-all idea that you have to be this type of person to be a good lawyer.
527
00:47:27,404 --> 00:47:39,899
And I think in the future, especially now that AI is in the mix, if we can all treat
ourselves and have the powers that be treat associates and young lawyers like athletes and
528
00:47:39,899 --> 00:47:41,100
performers.
529
00:47:41,100 --> 00:47:46,332
Athletes and performers don't just focus on the one skill that brings them glory on the
field or on the stage.
530
00:47:46,332 --> 00:47:51,444
They focus on kind of holistic, multi-dimensional performance.
531
00:47:51,444 --> 00:47:57,106
And if they are struggling with an aspect of that performance, they have coaches
532
00:47:57,142 --> 00:48:00,385
and trainers to help them get better at that.
533
00:48:00,385 --> 00:48:03,568
Even elite athletes struggle, right?
534
00:48:03,568 --> 00:48:05,188
Or they want to improve.
535
00:48:05,188 --> 00:48:14,536
I've read a lot of books by Phil Jackson, you know, the famous coach of the Bulls and the
Lakers, I think.
536
00:48:14,577 --> 00:48:19,020
And he talked about really understanding that every athlete is an individual.
537
00:48:19,020 --> 00:48:26,094
And I think if we could start really regarding every law student and lawyer as an
individual with individual strengths
538
00:48:26,094 --> 00:48:37,014
and individual anxieties and challenges and talk about all that openly instead of kind of
promoting this fake-it-till-you-make-it or don't-show-weakness mentality.
539
00:48:37,014 --> 00:48:49,972
I love admitting what I'm not good at because then I can get help and study and learn and
be a lifelong learner and hire a boxing trainer
540
00:48:49,972 --> 00:48:52,913
in my 50s to help me become an athlete now.
541
00:48:52,913 --> 00:48:59,526
And now I'm able to step into those performance arenas and speak to hundreds, sometimes
thousands of people.
542
00:48:59,526 --> 00:49:06,029
And I never could have done that when I was 25, 30, because I didn't know, I was just
faking it.
543
00:49:06,069 --> 00:49:13,793
So I'm a huge fan of let's all treat each other like athletes and performers and focus on
multi-dimensional fitness.
544
00:49:14,113 --> 00:49:16,314
It's not a one-size-fits-all
545
00:49:16,984 --> 00:49:17,715
profession.
546
00:49:17,715 --> 00:49:20,858
Let's really understand one another's strengths.
547
00:49:20,858 --> 00:49:22,630
Let's champion each other's strengths.
548
00:49:22,630 --> 00:49:28,647
Let's help each other really level up our performance and actually enjoy it too.
549
00:49:28,714 --> 00:49:30,485
So we can do it for a long time.
550
00:49:30,485 --> 00:49:35,793
Well, that's great advice. And Phil Jackson, he had a little bit of success in the NBA.
551
00:49:36,356 --> 00:49:38,730
I mean, he won six championships with the Bulls.
552
00:49:38,730 --> 00:49:42,676
I don't know how many he won with the Lakers, but I think he won a few there.
553
00:49:42,830 --> 00:49:44,407
He's got a book called Eleven Rings.
554
00:49:44,407 --> 00:49:45,834
So he's won at least eleven.
555
00:49:45,834 --> 00:49:48,897
Okay, wow, that's incredible.
556
00:49:48,897 --> 00:49:52,641
Well, you are an absolute pleasure to talk to, Heidi.
557
00:49:52,641 --> 00:50:01,930
We missed a whole section of the agenda that we were going to talk through, but I would
seriously love to have you back down the road to continue the conversation.
558
00:50:02,006 --> 00:50:02,637
I would love that.
559
00:50:02,637 --> 00:50:04,250
It's been a pleasure talking to you as well.
560
00:50:04,250 --> 00:50:06,286
It makes me excited about the future.
561
00:50:06,286 --> 00:50:07,826
Awesome, good stuff.
562
00:50:07,826 --> 00:50:11,766
Well, listen, have a good rest of your week and we'll chat again soon.
563
00:50:12,606 --> 00:50:14,560
Alright, thank you.
00:00:03,756
Heidi, how are you this afternoon?
2
00:00:03,756 --> 00:00:04,399
I'm great.
3
00:00:04,399 --> 00:00:04,908
How are you?
4
00:00:04,908 --> 00:00:06,220
Thanks so much for having me.
5
00:00:06,220 --> 00:00:08,270
Yeah, I appreciate you being here.
6
00:00:08,270 --> 00:00:15,288
I think you and I might have got connected through Jen Leonard maybe or...
7
00:00:15,288 --> 00:00:15,919
think so.
8
00:00:15,919 --> 00:00:20,988
a huge fan of Jen Leonard and your mutual commentary on LinkedIn, et cetera.
9
00:00:20,988 --> 00:00:22,530
I think that's how you and I met.
10
00:00:22,530 --> 00:00:23,081
That's right.
11
00:00:23,081 --> 00:00:23,302
Yeah.
12
00:00:23,302 --> 00:00:28,667
And I think you had a, did you have an article on, I think AI hallucinations?
13
00:00:28,667 --> 00:00:31,886
And I read that I was like, I got to get her on the podcast.
14
00:00:31,886 --> 00:00:41,666
Yes, I've been kind of obsessed with tracking lawyers and pro se litigants and experts and
law firms who have run into some trouble with using AI.
15
00:00:41,726 --> 00:00:46,966
I've been trying to, you know, I'm a professor, so I've been trying to educate my students
on what to do and what not to do.
16
00:00:46,966 --> 00:00:51,326
So I decided to write a blog piece on all the hallucination cases.
17
00:00:51,326 --> 00:00:55,278
And we're up to like 39 cases now, which is pretty incredible.
18
00:00:55,278 --> 00:00:56,438
That's unbelievable.
19
00:00:56,838 --> 00:00:57,938
Well, that's a good segue.
20
00:00:57,938 --> 00:01:01,538
You were starting to tell us a little bit about what it is, what you do.
21
00:01:01,778 --> 00:01:06,054
Why don't you give us a quick introduction and fill us in on that.
22
00:01:06,058 --> 00:01:07,819
Sure, so I'm a law professor.
23
00:01:07,819 --> 00:01:09,479
I'm a professor of legal writing.
24
00:01:09,479 --> 00:01:12,441
I teach at New York Law School in Manhattan.
25
00:01:12,601 --> 00:01:14,662
But I was a litigator for 20 years.
26
00:01:14,662 --> 00:01:16,583
I was a construction lawyer.
27
00:01:16,583 --> 00:01:21,098
I went straight through from undergrad to law school and really had no idea what I was
getting into.
28
00:01:21,098 --> 00:01:27,138
But I landed a job at a construction law firm and then ended up doing that for my entire
litigation career.
29
00:01:27,138 --> 00:01:31,256
But I've been teaching law, teaching legal writing as my specialty for...
30
00:01:31,256 --> 00:01:38,134
the past, wow, 16 years, and I write books about well-being and performance issues for law
students and lawyers.
31
00:01:38,584 --> 00:01:41,136
Well, those are all very relevant topics.
32
00:01:41,136 --> 00:01:47,440
Well-being in the, especially in the big law world is a topic of conversation frequently.
33
00:01:47,440 --> 00:02:01,460
And I know there was a, I won't say the name of the firm, but there was a, I don't know if
it was a policy or a memo or something that came out from a big law firm where, know,
34
00:02:01,460 --> 00:02:05,592
basically said, look, this, your, your job is your top priority.
35
00:02:05,592 --> 00:02:07,394
Your personal life comes second.
36
00:02:07,394 --> 00:02:08,194
And
37
00:02:08,332 --> 00:02:10,433
You know, it has led to so much burnout.
38
00:02:10,433 --> 00:02:13,196
mean, we've had lawyers working here for us.
39
00:02:13,196 --> 00:02:16,730
You see so many that move into KM for work-life balance.
40
00:02:16,730 --> 00:02:27,939
And then you've got AI, which is another super hot topic and you being focused on the
writing aspect of legal work.
41
00:02:28,400 --> 00:02:30,782
There's lots to talk about in your world.
42
00:02:31,022 --> 00:02:35,522
Last to talk about, yes, AI kind of waltzed into my legal writing classroom.
43
00:02:35,522 --> 00:02:43,962
I think it was the spring, February of 23, my students told me about it in the middle of
class and I did the riskiest thing a professor can do in class.
44
00:02:43,962 --> 00:02:46,322
And I was like, show me right now.
45
00:02:46,322 --> 00:02:51,702
we gave the, we gave Chad GPT an assignment that I had just given my students.
46
00:02:51,702 --> 00:02:55,082
And I'm watching this thing in real time happen on the screen.
47
00:02:55,082 --> 00:02:57,862
And I thought, oh my gosh, I'm out of a job.
48
00:02:58,022 --> 00:02:59,342
But you know, that kind of
49
00:02:59,342 --> 00:03:12,622
introduced me to the concepts and when my students and I dug into the actual product that
the tool had generated, we gave it a 45 out of 100 in terms of my grading rubric.
50
00:03:12,622 --> 00:03:17,182
But I decided to go all in on, I'm not really a tech person at all.
51
00:03:17,182 --> 00:03:26,060
I'm kind of a Luddite when it comes to tech, when it comes to AI and AI and legal, I've
just decided to go completely all in.
52
00:03:26,060 --> 00:03:33,314
and embraced it and designed classes and coursework and workshops so I can understand how
it works and also teach my students.
53
00:03:33,314 --> 00:03:34,095
Yeah.
54
00:03:34,095 --> 00:03:35,696
Well, it's a very relevant topic.
55
00:03:35,696 --> 00:03:55,375
mean, there's so much chatter in cyberspace these days about how lawyers are trained and
how historically clients have subsidized the training of young lawyers and how different
56
00:03:55,375 --> 00:03:59,192
this model will be if lower level legal work.
57
00:03:59,192 --> 00:04:01,245
gets automated to some extent.
58
00:04:01,245 --> 00:04:08,244
And it sounds like you guys are ahead of the curve at New York Law School in terms of
preparing for that.
59
00:04:09,096 --> 00:04:19,589
trying to embrace still teaching and instilling the students really strong fundamental
skills in research and writing because my issue with AI right now, the way it's being kind
60
00:04:19,589 --> 00:04:23,670
of touted out there in the media is all this pressure to be fast.
61
00:04:23,670 --> 00:04:28,552
Every ad is about accelerate, accelerate, expedite.
62
00:04:28,552 --> 00:04:36,716
But a law student or even a lawyer one year out of law school still doesn't have that
level of substantive expertise to know
63
00:04:36,716 --> 00:04:41,950
or differentiate good AI output from bad or mediocre AI output.
64
00:04:41,950 --> 00:04:54,219
So we're still trying to really focus on teaching really solid fundamental legal research
and writing skills and layering on getting comfortable using these skills, using the AI
65
00:04:54,219 --> 00:04:55,199
tools.
66
00:04:55,199 --> 00:05:04,166
So they're able to do what I like to call output discernment, like discern whether
something that AI can produce, yes, in 10 seconds, whether that's worth it or not, is it
67
00:05:04,166 --> 00:05:05,004
good?
68
00:05:05,004 --> 00:05:07,255
And a lot of times it's not.
69
00:05:07,756 --> 00:05:10,809
Again, I'm all in on these tools, but we have to be realistic.
70
00:05:10,809 --> 00:05:17,384
And just because something can do something quickly doesn't mean that velocity equals
quality.
71
00:05:17,384 --> 00:05:22,268
And so just kind of trying to teach those things and still those skills in our students.
72
00:05:22,268 --> 00:05:30,354
So when they're out there and the law firms are expecting them to know how to use these
tools, the students, first of all, aren't seeing them for the first time, but second, are
73
00:05:30,354 --> 00:05:34,798
able to use them ethically and responsibly and not end up being one of the
74
00:05:34,798 --> 00:05:43,042
39 or 40 cases out there that we're seeing where lawyers, well-educated lawyers are
running into trouble using these tools.
75
00:05:43,042 --> 00:05:54,329
Yeah, well, that's a great segue into this AI hallucination of legal cases and making its
way all the way in front of the court repeatedly.
76
00:05:54,490 --> 00:06:11,031
There was obviously the big New York case that the case wasn't big, but the situation was
where a hallucination had occurred and a false citation from something that I believe was
77
00:06:11,031 --> 00:06:12,522
completely made up.
78
00:06:12,532 --> 00:06:18,065
And, you know, the poor lawyer there, mean, AI was still new.
79
00:06:18,065 --> 00:06:26,390
Somebody had to be first, right, to get kind of nailed on this for not going back and
double checking.
80
00:06:26,390 --> 00:06:36,455
But yeah, that was what caught my eye about the writing that you did was just the sheer
volume and how this trend has continued.
81
00:06:38,216 --> 00:06:39,747
was it 37 cases?
82
00:06:39,747 --> 00:06:42,188
How many cases out there have had this issue?
83
00:06:42,198 --> 00:06:49,802
Yes, so I started off reading that first case, Mata versus Avianca, but then there was
another case a months later and another case.
84
00:06:49,802 --> 00:07:00,777
And right now by my tally, and I'll explain how others are finding other cases, I think I
have 14 cases in which lawyers have gotten in trouble for using AI without checking and
85
00:07:00,777 --> 00:07:02,167
verifying the sites.
86
00:07:02,167 --> 00:07:05,729
And the cases call it hallucinated cases, fictitious.
87
00:07:05,729 --> 00:07:08,558
A most recent case called it phantom cases.
88
00:07:08,558 --> 00:07:09,558
fake cases.
89
00:07:09,558 --> 00:07:14,498
if anybody out there is trying to research these cases, use all of those synonyms.
90
00:07:14,698 --> 00:07:25,558
But then what's also shocking is that, or I think surprising and alarming, is that pro se
litigants, litigants who are representing themselves without lawyers, and a lot of people
91
00:07:25,558 --> 00:07:31,258
are saying AI is great for access to justice and people not needing to hire a lawyer.
92
00:07:31,298 --> 00:07:36,462
But pro se litigants, at least 12 by my count, have also submitted
93
00:07:36,462 --> 00:07:40,645
court filings, either complaints or pleadings or briefs.
94
00:07:40,645 --> 00:07:53,153
And that is causing a burden on the court personnel and opposing counsel to research those
cases, spend time figuring out that the cases don't exist, pointing them out to the pro se
95
00:07:53,153 --> 00:07:55,955
litigant, and then the judge who...
96
00:07:56,115 --> 00:08:05,652
Those cases say that the courts exercise what they call special solicitude, or they're a
little lenient on litigants who don't have lawyers, but they have to remind them, hey, you
97
00:08:05,652 --> 00:08:06,136
can't...
98
00:08:06,136 --> 00:08:06,616
do this.
99
00:08:06,616 --> 00:08:10,318
If you do this again, we're going to consider imposing sanctions.
100
00:08:10,318 --> 00:08:15,811
And some of the courts have imposed pretty significant sanctions on even pro se litigants.
101
00:08:15,811 --> 00:08:18,412
And then I'll tell you kind of two other categories.
102
00:08:19,033 --> 00:08:23,155
One law firm just keeps doubling down.
103
00:08:23,656 --> 00:08:29,719
It's a law firm filing cases in New York against the New York Department of Education.
104
00:08:29,719 --> 00:08:35,872
And they've won the main case, and they're entitled to their attorney's fees under this
statute.
105
00:08:36,408 --> 00:08:42,822
But they keep using ChatGPT to calculate their fee request or to like support their fee
requests.
106
00:08:42,822 --> 00:08:44,784
And they've done this eight times.
107
00:08:44,784 --> 00:08:58,823
And eight times the judges, different judges in New York, but different judges have said,
we're not accepting this fee request based on ChatGPT's calculations because in ChatGPT's
108
00:08:58,823 --> 00:09:00,364
current state.
109
00:09:00,448 --> 00:09:06,243
it's not reliable as a source for this information, but that law firm did that eight
times.
110
00:09:06,243 --> 00:09:14,550
So if I were one of those judges, I'd be kind of annoyed, but at least I guess they're
persistent and zealously representing their client.
111
00:09:14,550 --> 00:09:19,473
And then one more category I'll point out, one expert witness.
112
00:09:19,494 --> 00:09:24,717
there's only two experts that I've come across so far, but one in an actual case.
113
00:09:25,218 --> 00:09:27,839
think this was also, I think this was in a state case.
114
00:09:27,839 --> 00:09:29,922
It was someone objecting to
115
00:09:29,986 --> 00:09:32,898
the way a will or an estate was being administered.
116
00:09:32,898 --> 00:09:41,892
And the expert witness tried to use an AI tool to calculate the value of real estate.
117
00:09:41,892 --> 00:09:46,435
And again, the judge was basically was like, there's no backup.
118
00:09:46,435 --> 00:09:51,938
This doesn't meet the standard for admissibility of expert testimony.
119
00:09:51,938 --> 00:09:53,426
There's actual rules.
120
00:09:53,426 --> 00:09:55,700
In some states, it's called the Frye standard.
121
00:09:55,700 --> 00:10:00,066
In other states, people follow the Dober or the Daubert standard.
122
00:10:00,066 --> 00:10:10,180
But this expert hadn't followed any of those standards and had just tried to submit this
expert opinion, this expert report based on AI and the judge was kind of having none of
123
00:10:10,180 --> 00:10:10,880
it.
124
00:10:11,222 --> 00:10:14,966
So those are the four categories that I've come across so far.
125
00:10:14,966 --> 00:10:24,251
Yeah, so AI has actually made some strides recently in regard to calculation of numbers.
126
00:10:24,272 --> 00:10:36,459
I've fallen down a few rabbit holes in AI, and one of them was recently I learned about
tokenizers and how tokenizers translate text into tokens.
127
00:10:36,459 --> 00:10:41,722
And if you look at how it originally tokenized numbers,
128
00:10:41,806 --> 00:10:57,406
Um, it was problematic for arithmetic operations and, um, they've now, uh, kind of change
strategies around that, but it's just, I'm sure you've seen the, um, there was a, I'll
129
00:10:57,406 --> 00:11:07,406
call it a meme going around about if you went to chat GPT, maybe, I don't know, six, eight
months ago and asked how many, how many times is the word, the letter R and strawberry, it
130
00:11:07,406 --> 00:11:11,110
would say three or one or, um,
131
00:11:11,220 --> 00:11:15,392
or four, would just give kind of inconsistent numbers.
132
00:11:15,392 --> 00:11:25,256
you know, that's, that's one of the challenges with, with, with AI is AI is essentially a
mathematical representation of its training data.
133
00:11:25,256 --> 00:11:26,536
That's really all it is.
134
00:11:26,536 --> 00:11:26,916
Right.
135
00:11:26,916 --> 00:11:38,351
And it, it can perform some absolute magical tasks as a result of this that are really
above the understanding level of even the engineers who've built these systems.
136
00:11:38,351 --> 00:11:40,502
It's really, um,
137
00:11:40,502 --> 00:11:52,677
It's really impressive what output they're able to come up with, but it's a bit of a black
box and it is not deterministic and hallucinations, even when you dial temperature down to
138
00:11:52,677 --> 00:11:56,249
zero are very difficult to control.
139
00:11:56,249 --> 00:11:59,310
there's, really have to be cautious.
140
00:11:59,310 --> 00:12:09,454
And it sounds like these are trained lawyers who understand the importance of properly
citing
141
00:12:09,454 --> 00:12:19,743
cases and yet they're still doing this after, you know, the attorney in New York was made
the poster child for bad practice.
142
00:12:19,743 --> 00:12:21,385
Like how is this happening?
143
00:12:21,385 --> 00:12:23,326
How is this continuing to happen?
144
00:12:23,340 --> 00:12:28,083
Yeah, it boggles the mind, honestly, because these cases are in the news.
145
00:12:28,083 --> 00:12:37,318
They're written judicial opinions that you can find on all the legal research databases,
but also they're in articles, they're in the newspapers.
146
00:12:37,358 --> 00:12:40,430
And to me, it's surprising that this is still happening.
147
00:12:40,430 --> 00:12:49,805
And I feel like on the pro se side, I can kind of understand that the average citizen who
hasn't gone to law school and doesn't read like legal journals and legal blogs might not
148
00:12:49,805 --> 00:12:51,266
know this is happening.
149
00:12:51,266 --> 00:12:59,851
And we as a profession should probably do a better job of educating the general public
about these use cases for law.
150
00:12:59,851 --> 00:13:01,822
But lawyers, I we have a duty.
151
00:13:01,822 --> 00:13:12,759
The ethics obligations and the rules of professional conduct, we have a duty to, quote,
stay abreast of changes, technological changes, the benefits and risks.
152
00:13:12,759 --> 00:13:19,136
So a lot of judges across the country are trying to get ahead of this and issue
153
00:13:19,136 --> 00:13:20,587
standing orders.
154
00:13:20,587 --> 00:13:32,133
Bloomberg and Lexis both have individual trackers of all that and they keep adding to this
these databases and spreadsheets of judges across the country, federal and state court,
155
00:13:32,133 --> 00:13:41,198
who have taken the time to add to or issue new rules in their their chambers in their
courts related to AI.
156
00:13:41,198 --> 00:13:42,342
It's been really interesting.
157
00:13:42,342 --> 00:13:48,972
I've been having my students read a lot of these judges standing rules and understand to
certain the differences.
158
00:13:48,992 --> 00:13:52,613
and see certain judges are most concerned about hallucinations.
159
00:13:52,613 --> 00:14:06,909
And so those judges are requiring anybody who files anything in their court to certify
that every citation in that brief or pleading or whatever is accurate and stands for the
160
00:14:06,909 --> 00:14:09,390
proposition for which it's being stated.
161
00:14:09,390 --> 00:14:18,286
Other judges are concerned about things like confidentiality and making sure that lawyers
have an inadvertently disclosed attorney client
162
00:14:18,286 --> 00:14:24,726
privileged information by using a public AI tool that's been training on that information.
163
00:14:24,726 --> 00:14:37,062
So different judges across the country have focused on different things, but all of them
have been trying to raise awareness about our ethical obligation when we sign our names on
164
00:14:37,062 --> 00:14:38,749
documents that we submit to court.
165
00:14:38,749 --> 00:14:42,162
I mean, the famous federal rule 11.
166
00:14:42,162 --> 00:14:46,343
Roll of Civil Procedure we're signing that we have checked everything.
167
00:14:46,343 --> 00:14:57,307
And so now there's an added layer, an added duty that we have to undertake whenever we're
engaging with these tools that might be baked into whatever software we're using.
168
00:14:57,307 --> 00:15:08,000
we have to, I kind of feel like this is an example of why all this pressure to be fast,
accelerate and expedite, it's a little premature in my opinion.
169
00:15:08,000 --> 00:15:09,562
feel like we don't,
170
00:15:09,562 --> 00:15:11,994
We shouldn't be focused so much on being fast.
171
00:15:11,994 --> 00:15:13,905
We should focus on being right.
172
00:15:13,905 --> 00:15:26,862
And so if it takes us an extra hour or two or et cetera to check, we have to undertake
that extra step or get someone in our firms, in our offices to be the checker.
173
00:15:26,862 --> 00:15:35,348
It doesn't have to always be the same person, but we need to incorporate the checking into
our protocol, our work protocols and our writing workflow.
174
00:15:35,478 --> 00:15:47,270
Isn't there an implicit certification when you submit a instrument to the court that it's
you've done these things that so they want a separate certification.
175
00:15:47,506 --> 00:15:50,386
So rule 11 is the one that we talk about a lot.
176
00:15:50,386 --> 00:16:02,166
And when we sign a pleading or sign a brief, we are certifying that it's being submitted
for a proper purpose, not for an improper purpose, not to delay or harass the other side.
177
00:16:02,386 --> 00:16:11,618
that we've, it's, I'm blanking on all the exact language, but that it's based in law and
it's based in the facts that are part of the case.
178
00:16:11,618 --> 00:16:14,859
But now we actually have another duty.
179
00:16:14,859 --> 00:16:20,340
A lot of experts are saying we don't necessarily need to change our rules.
180
00:16:20,340 --> 00:16:23,641
Our rules can encompass these changes in technology.
181
00:16:23,641 --> 00:16:36,265
But we need to raise awareness that part of that signature process now is double checking
that the citations, the statutes, the cases, the regulations that we're citing were not
182
00:16:36,265 --> 00:16:40,236
just fabricated by AI that they actually exist.
183
00:16:40,264 --> 00:16:43,474
and that they stand for what we're citing them for.
184
00:16:43,474 --> 00:16:49,300
I a lot of the legal AI tools are saying, we reduce hallucinations.
185
00:16:49,434 --> 00:17:00,289
But respectfully, I will say, as a teacher, I've been testing out those tools on rules and
laws that I know really well because I've been teaching them for so long.
186
00:17:00,289 --> 00:17:07,384
And I know the cases that should pop up immediately that are most on point and mandatory.
187
00:17:07,384 --> 00:17:10,616
precedent, not just persuasive authority.
188
00:17:10,756 --> 00:17:20,293
And even though the cases that I'm getting from these legal tools exist, they're not
actually the most on point or the most current.
189
00:17:20,293 --> 00:17:30,751
don't actually, the summaries of these cases don't always hit all the required elements of
the rule, which are different from factors that a court might weigh.
190
00:17:30,751 --> 00:17:35,714
So we're getting there, but I think we still have some work to do.
191
00:17:35,714 --> 00:17:36,255
Yeah.
192
00:17:36,255 --> 00:17:43,740
So what are like best practices for supervising junior lawyers and staff using these
GEN.AI tools?
193
00:17:45,442 --> 00:17:47,954
Well, I'm so glad you mentioned staff too.
194
00:17:47,954 --> 00:17:53,868
I we have, I think our entire workplace communities need to be educated in these tools.
195
00:17:54,088 --> 00:18:06,658
And in my role as a supervisor of law students and the mentor to law students, the way I'm
approaching it is to kind of set up, when it comes to research, I think we need to
196
00:18:06,658 --> 00:18:12,934
approach research like we would any project, not just taking what's given to us at first
glance.
197
00:18:12,934 --> 00:18:22,598
I like to explain it like breadcrumb trails, teaching our students that if you're going to
research something, sure, you can start with AI, but you actually need to do the same
198
00:18:22,598 --> 00:18:28,381
search in what we call terms and connectors searches, like the Boolean searches.
199
00:18:28,381 --> 00:18:32,442
Do the same search with natural language, like Google-style searches.
200
00:18:32,442 --> 00:18:36,264
Try the same search on two different legal research platforms.
201
00:18:36,304 --> 00:18:41,442
Then, if you're getting the same pool of cases, you can feel comfortable that you have the
right
202
00:18:41,442 --> 00:18:46,944
body of law and you can check and make sure it's the most up to date and the most
accurate.
203
00:18:46,944 --> 00:18:56,328
But just relying on AI to do it, just because it's quick, that's not responsible in my
opinion now.
204
00:18:56,348 --> 00:19:02,171
When I was a practicing attorney, I always used to worry about whether I'd miss something.
205
00:19:02,171 --> 00:19:06,663
And so these are just the same techniques I would use back then before AI even started.
206
00:19:06,663 --> 00:19:09,614
would start a research trail from
207
00:19:09,698 --> 00:19:12,320
And I like the breadcrumb approach.
208
00:19:12,320 --> 00:19:15,773
Start a breadcrumb trail from five different starting points.
209
00:19:15,773 --> 00:19:28,123
And if you end up with the same pool of cases by starting with AI, doing a natural
language search, doing terms and connectors, trying two different platforms, then you know
210
00:19:28,123 --> 00:19:29,964
you've got the right pool of cases.
211
00:19:29,998 --> 00:19:31,078
Gotcha.
212
00:19:33,139 --> 00:19:45,524
speaking of using AI, I think that I just saw a study maybe this morning about, um, and I
didn't, the outcome of this study, and it may have been informal.
213
00:19:45,524 --> 00:19:59,830
I don't know that it was like a peer reviewed academic study, but it, it somehow measured
the critical thinking capacity of kids who are using gen AI in writing.
214
00:20:00,154 --> 00:20:15,435
and actually in different capacities and found that those that are relying on these tools
have a reduced capacity, which for me, the first thing that went through my head is
215
00:20:15,435 --> 00:20:16,706
they're using it wrong.
216
00:20:16,706 --> 00:20:20,808
If you're relying, AI should augment what you do.
217
00:20:20,808 --> 00:20:24,401
I use it all the time for critical thinking processes.
218
00:20:24,401 --> 00:20:29,234
I have a co-CEO custom GPT.
219
00:20:29,336 --> 00:20:36,326
that I've uploaded all sorts of like our core values, our ideal customer profile
information.
220
00:20:36,326 --> 00:20:45,010
We have a financial deck with all our finance information, our pitch deck that we used to
basically describe what we do.
221
00:20:45,010 --> 00:20:46,911
And I use it to brainstorm ideas.
222
00:20:46,911 --> 00:20:51,572
Like we had a, we have it, have an investor who challenged us.
223
00:20:51,572 --> 00:20:53,613
We're having amazing growth right now.
224
00:20:53,613 --> 00:20:58,194
And he said, what if you, what if you doubled or tripled your marketing budget?
225
00:20:58,194 --> 00:20:59,454
How would you,
226
00:20:59,598 --> 00:21:01,398
how would you spend that money?
227
00:21:01,578 --> 00:21:03,418
And I was like, wow, I don't know.
228
00:21:03,418 --> 00:21:13,058
So first thing I did is I went to my co CEO GPT and I put in, I had all the budget line
items documented and said, what else if we were to double or triple?
229
00:21:13,058 --> 00:21:25,878
And I got great ideas, but I had to filter through them and it was just a, you know, it
was a shotgun and I needed to pick the pieces out that were valuable.
230
00:21:25,878 --> 00:21:27,018
it's,
231
00:21:27,246 --> 00:21:33,906
How do we embrace Gen.ai and not lose our, I heard you use the term writer's identity at
one point.
232
00:21:33,906 --> 00:21:34,760
How do we do that?
233
00:21:34,760 --> 00:21:35,610
Yes.
234
00:21:35,740 --> 00:21:38,321
my gosh, so many things I want to respond to what you just said.
235
00:21:38,321 --> 00:21:52,167
First, on the kids study, I have been encouraging my law students not to use AI as a
substitute for their own critical thinking, but instead, like you said, to kind of help be
236
00:21:52,167 --> 00:21:54,908
a supplement or help enhance their creativity.
237
00:21:54,908 --> 00:22:04,532
But on the critical thinking part, I've been using, so Khan Academy, which I never used
when I was in high school or college, but a lot of my students have.
238
00:22:04,590 --> 00:22:07,410
They have an AI tool called Conmigo.
239
00:22:07,410 --> 00:22:11,170
It's a play on the Spanish for come with or with me, I think.
240
00:22:11,170 --> 00:22:12,970
That's their AI tool.
241
00:22:13,050 --> 00:22:19,290
It is so awesome because it's set up like a Socratic tool where it doesn't just give you
the answer.
242
00:22:19,290 --> 00:22:22,930
It actually helps you critically think through problems.
243
00:22:22,930 --> 00:22:25,430
And here's a little hint to the lawyers out there.
244
00:22:25,430 --> 00:22:28,710
If you click on the humanities button, it knows law.
245
00:22:29,230 --> 00:22:34,434
So while it's set up for like K through 12, I think it can be useful in law school.
246
00:22:34,434 --> 00:22:46,264
And to your point about students or younger people using this, but maybe skipping over the
critical thinking, there's a writing tutor through Conmigo, but it won't just write it for
247
00:22:46,264 --> 00:22:46,644
you.
248
00:22:46,644 --> 00:22:54,641
It asks you questions and like a Socratic tutor makes you have to think critically through
what you wanna write about.
249
00:22:54,641 --> 00:23:00,015
And then you write about it and then it asks you to think critically about how you wanna
improve it.
250
00:23:00,015 --> 00:23:03,942
So I think in education and legal education,
251
00:23:03,942 --> 00:23:09,343
using AI tools that are set up and designed to be more like your GPT.
252
00:23:09,343 --> 00:23:13,544
Not just give you the answers, but to make you think.
253
00:23:13,925 --> 00:23:22,067
And then there's those prompting techniques, which I'm not an expert at these, but tree of
thought, where you ask the AI to do a tree of thought prompt.
254
00:23:22,067 --> 00:23:33,270
And if you're brainstorming different solutions to problems, it can go down three
different paths for solutions to problems or brainstorming or difference of
255
00:23:33,270 --> 00:23:41,175
of opinions, like lawyers can use it to debate different points of view, or do counter
arguments and arguments back and forth.
256
00:23:41,175 --> 00:23:50,320
And then the chain of thought prompt technique, which is show your work, kind of step by
step moving through a critical thinking analysis.
257
00:23:50,320 --> 00:24:01,634
So I think if we can incorporate and educate all of us on how not to use these tools just
to outsource our thinking, because of course, we're going to atrophy, we're not going to
258
00:24:01,634 --> 00:24:03,655
learning, we're going to go backwards.
259
00:24:03,655 --> 00:24:09,358
But if we can set it up so it's pushing us harder, it's leveling us up.
260
00:24:09,358 --> 00:24:12,500
I identify as a writer, first and foremost.
261
00:24:12,500 --> 00:24:15,241
I love the concept of writer identity.
262
00:24:15,822 --> 00:24:25,048
When I fill out forms, I travel a lot, so when I fill out forms for, I don't know,
immigration or whatever, and I have to put my occupation, I put writer.
263
00:24:25,048 --> 00:24:29,592
Way before my, quote, fancier titles of lawyer and professor, because I'd
264
00:24:29,592 --> 00:24:32,484
deeply in my soul identify as a writer.
265
00:24:32,565 --> 00:24:35,947
So I could be really threatened by AI, right?
266
00:24:36,008 --> 00:24:37,769
But I'm not, I love it.
267
00:24:37,769 --> 00:24:40,101
I use it as a super thesaurus.
268
00:24:40,101 --> 00:24:51,542
Things I can't do with a traditional dictionary thesaurus, I can ask it question, I can
have it help me think of the perfect, give me 10 examples of this verb that I'm trying to
269
00:24:51,542 --> 00:24:53,443
capture this tone with.
270
00:24:53,443 --> 00:24:56,585
So I think there's ways like that that we can.
271
00:24:56,834 --> 00:25:05,158
you know, figure out first of all who we are, how we already identify as writers, but what
aspects of writing maybe do we need a little help with?
272
00:25:05,158 --> 00:25:08,339
What aspects of writing do we find more tedious?
273
00:25:08,339 --> 00:25:18,624
Use it for things that can help us stay in a flow state more on the aspects of writing or
our work that we love doing and we feel amazing doing.
274
00:25:18,624 --> 00:25:22,758
Ethan Malik who wrote the book, he's a Wharton professor, wrote the book
275
00:25:22,758 --> 00:25:27,099
I'm using that as my textbook for my AI class at school.
276
00:25:27,099 --> 00:25:35,362
He challenges all of us to spend 10 hours using these tools for things that we love or
things that we just think are fun.
277
00:25:35,362 --> 00:25:47,745
And we'll learn so much about how these tools can make us better and level up instead of
just using it to cheat or get around doing our real work.
278
00:25:47,918 --> 00:25:49,807
So that's my take on it.
279
00:25:49,807 --> 00:25:52,768
Yeah, I'm a big fan of Ethan Molyke.
280
00:25:52,768 --> 00:25:56,690
I think he's great.
281
00:25:57,150 --> 00:26:02,112
He's a little more bullish than me on AI's capacity to comprehend.
282
00:26:02,112 --> 00:26:05,013
I'm a little more bearish on that.
283
00:26:05,013 --> 00:26:12,076
There's been several studies that have demonstrated a counter case for AI's ability to
comprehend.
284
00:26:12,076 --> 00:26:16,538
There was one by the Facebook intelligence team.
285
00:26:16,690 --> 00:26:20,392
It's called the GSM 8K symbolic test.
286
00:26:20,392 --> 00:26:25,015
The GSM 8K is grade school math.
287
00:26:25,015 --> 00:26:27,016
There's 8,000 questions.
288
00:26:28,697 --> 00:26:40,564
What this study did was change the test in immaterial ways and present it to AI and then
measure its ability to respond.
289
00:26:40,564 --> 00:26:43,150
Some of the changes were very simple.
290
00:26:43,150 --> 00:26:45,410
I'm going to oversimplify here because
291
00:26:45,410 --> 00:26:53,515
We don't want to take too much time, but you know, Sarah went to the store and got 10
apples and they changed Sarah's name to Lisa.
292
00:26:53,515 --> 00:26:56,997
That little change, depending on the sophistication of the model, right?
293
00:26:56,997 --> 00:27:02,880
It, because again, that was part of their, the, GSM AK battery of tests was part of their
training material.
294
00:27:02,880 --> 00:27:05,461
So first thing it does is defaults to that.
295
00:27:05,522 --> 00:27:13,726
So the symbolic piece is the new part of the GSM AK and it really threw things off
dramatically.
296
00:27:13,774 --> 00:27:19,702
So I'm not bullish on right now AI's ability to comprehend.
297
00:27:20,604 --> 00:27:21,145
He is.
298
00:27:21,145 --> 00:27:23,799
That's my only disagreement with him, though.
299
00:27:23,799 --> 00:27:26,303
think he's a great guy to follow on LinkedIn.
300
00:27:26,303 --> 00:27:27,634
He has great content.
301
00:27:28,462 --> 00:27:31,202
I mean, I'm kind of in the same vein.
302
00:27:31,202 --> 00:27:36,282
I don't think that these tools are ready to replace legal writers.
303
00:27:36,282 --> 00:27:45,370
I as a legal writing professor, I teach that all good legal writing is structured around
rules.
304
00:27:45,370 --> 00:27:51,752
And in legal rules, they can either be element, like a checklist of required elements.
305
00:27:51,793 --> 00:27:54,894
I use the analogy when I teach my students to drive a car.
306
00:27:54,894 --> 00:27:56,394
You know, need a couple things.
307
00:27:56,394 --> 00:28:04,834
You have to have keys, working battery, unless it's like an electric car, keys, working
battery, fuel, and four inflated tires.
308
00:28:04,834 --> 00:28:07,174
If one of those is missing, the car doesn't move.
309
00:28:07,174 --> 00:28:08,594
That's elements.
310
00:28:08,814 --> 00:28:14,954
And then there's fact rules based on factors which a court might weigh, which is like
searching for an apartment.
311
00:28:14,954 --> 00:28:21,494
You think you have a bunch of factors, but you might compromise one or the other if you
had a great location for a cheaper price, whatever.
312
00:28:22,034 --> 00:28:24,832
But AI right now, when I've tried it,
313
00:28:24,832 --> 00:28:28,996
it doesn't understand the difference between an elements rule and a factor-based rule.
314
00:28:28,996 --> 00:28:32,408
But that can be a completely different legal analysis.
315
00:28:32,449 --> 00:28:42,958
So I have found that I've had to teach the AI tool the difference between elements and
factors before it can give me a well-structured legal rule.
316
00:28:42,958 --> 00:28:45,041
So I kind of agree with you.
317
00:28:45,041 --> 00:28:52,256
I mean, this is a different example, obviously, but all this touting out there about speed
and acceleration.
318
00:28:52,710 --> 00:29:00,838
it does not make me faster as a legal writer because the stuff that it gives me right now
is not actually accurate in terms of structure.
319
00:29:00,838 --> 00:29:10,136
And when I'm teaching future lawyers how to write well in the legal space, everything
boils down to structure of the rule.
320
00:29:10,136 --> 00:29:12,188
The whole thing is based on the rule.
321
00:29:12,228 --> 00:29:18,574
So I think we've got some work to do, but I think we can train these tools to understand
why that's important.
322
00:29:18,817 --> 00:29:28,009
Bad legal, it can do bad legal writing, can write quickly, but that's not gonna solve the
problems that we need to be solving for our clients.
323
00:29:28,009 --> 00:29:31,522
And it's gonna annoy a lot of judges who have to read it.
324
00:29:31,522 --> 00:29:31,982
Yeah.
325
00:29:31,982 --> 00:29:38,356
And you know, I've heard, I've had debates about this here on the podcast and I've heard,
well, it doesn't matter.
326
00:29:38,356 --> 00:29:40,727
And this was more around reasoning.
327
00:29:40,727 --> 00:29:43,228
Comprehension is a precursor to reasoning.
328
00:29:43,228 --> 00:29:49,412
You can't reason your way to an answer if you can't comprehend the problem is my argument.
329
00:29:49,412 --> 00:29:54,684
So I think it does matter and people are, I, know, there's a lot of debate around this.
330
00:29:54,684 --> 00:30:01,270
And I think the reason it's important to understand if these models are reasoning and if
they're comprehending the, the input or the
331
00:30:01,270 --> 00:30:08,513
prompt or the question is because it helps you understand where to use the tool and where
not to.
332
00:30:08,513 --> 00:30:15,696
And it also gives you a lens through which to scrutinize the output.
333
00:30:16,156 --> 00:30:22,239
You should be skeptical today and probably for the foreseeable future.
334
00:30:22,239 --> 00:30:30,232
So I do think it is a relevant debate on whether or not these, because the argument is
335
00:30:30,232 --> 00:30:32,823
Well, we don't know how people reason, right?
336
00:30:32,823 --> 00:30:46,866
The brain is very poorly understood, and it's, and it's inner workings and you know, it's
a collection of neurons firing in a way that generates amazing things, writing and art and
337
00:30:46,866 --> 00:30:49,607
speech and creativity.
338
00:30:49,647 --> 00:31:00,300
And you know, we see some of these things come out of AI, but it's like correlation does
not imply causation is what I go back to just because something
339
00:31:01,303 --> 00:31:04,895
you know, output something that looks similar to something else.
340
00:31:04,895 --> 00:31:07,157
It doesn't mean it's the same driving force.
341
00:31:07,157 --> 00:31:11,400
So yeah, I've had, uh, I've had the debate and continue to have the debate.
342
00:31:11,400 --> 00:31:14,112
It's a relevant topic, whether or not these things reason.
343
00:31:14,112 --> 00:31:21,277
And I think the reasoning, um, terminology is being thrown out there way too early and way
too often.
344
00:31:21,277 --> 00:31:28,842
Um, but you know, I'm, people see it, people see that differently and that, and that's
okay.
345
00:31:29,454 --> 00:31:30,994
Yes, absolutely.
346
00:31:30,994 --> 00:31:31,314
Absolutely.
347
00:31:31,314 --> 00:31:40,434
That's why I kind of like with my students, at least I like the show your work kind of
thing, that chain of thought prompting, because you can't just leap from A to Z without
348
00:31:40,434 --> 00:31:41,614
explaining your reasoning.
349
00:31:41,614 --> 00:31:44,974
You have to walk, and then you start to see the flaws in the reasoning.
350
00:31:44,974 --> 00:31:52,546
If there is a flaw, there's assumption, there's logic leaps, there's flawed assumptions,
false assumptions, et cetera.
351
00:31:52,546 --> 00:31:53,327
Yeah.
352
00:31:53,327 --> 00:31:57,449
So, um, I use a tool, it's a custom GPT.
353
00:31:57,449 --> 00:31:59,981
It's fairly new in my tool belt.
354
00:31:59,981 --> 00:32:03,994
it's called prompt GPT and it helps me write prompts.
355
00:32:03,994 --> 00:32:09,868
But the last time you and I spoke, we talked about like strategies for prompt engineering
in a legal context.
356
00:32:09,868 --> 00:32:15,742
Like what is your, do you have any advice for people that are trying to wrap their heads
around that?
357
00:32:15,788 --> 00:32:16,509
Yes.
358
00:32:16,509 --> 00:32:16,898
my gosh.
359
00:32:16,898 --> 00:32:18,110
This is one of my favorite topics.
360
00:32:18,110 --> 00:32:30,130
I actually just wrote an article on this too, because I found, and this is me being a
little quirky, but I found that my own interaction with AI taught me how to be a better
361
00:32:30,130 --> 00:32:39,157
communicator to human beings in terms of prompting, if I needed them to do something, like
if I'm supervising someone or being a mentor.
362
00:32:39,157 --> 00:32:45,462
So I wrote a little piece about this, but I have learned several great techniques of
prompting.
363
00:32:45,762 --> 00:32:51,556
that are just kind of intuitive in terms of getting good output out of humans too.
364
00:32:51,556 --> 00:33:03,485
one example, I mean, I didn't make this up, but the original prompting engineer gurus were
telling us, give it a lot of context, give it a role, give it context for what tasks
365
00:33:03,485 --> 00:33:05,957
you're gonna give it, give it the task.
366
00:33:05,957 --> 00:33:12,226
If it's a law related task, give it the sort of phase or stage of.
367
00:33:12,226 --> 00:33:18,731
the litigation that you're working on or the stage of the transactional negotiation you're
working on.
368
00:33:18,731 --> 00:33:21,273
So it's context again for the task.
369
00:33:21,273 --> 00:33:25,936
Give it the format you want the output in and then give it the tone or the style.
370
00:33:26,117 --> 00:33:38,566
And then what I think is so fun for people who haven't engaged with this too much yet
already is let it do its thing and then change one of those parameters that you gave it.
371
00:33:38,752 --> 00:33:49,891
and see how it adjusts, like the tone, make something more academic sounding or more
sophisticated sounding or more professional sounding, make it less, make it more humorous.
372
00:33:50,091 --> 00:33:51,783
So that's one thing, give it context.
373
00:33:51,783 --> 00:34:01,221
And kind of tying this back to what I said at the beginning, I feel like when I was an
associate in a law firm and my bosses would give me an assignment, they wouldn't give me
374
00:34:01,221 --> 00:34:02,261
any of that context.
375
00:34:02,261 --> 00:34:04,323
And I had no idea what I was doing.
376
00:34:04,904 --> 00:34:08,056
It was the fake until you make it era of my life, which I
377
00:34:08,056 --> 00:34:09,467
highly do not recommend.
378
00:34:09,467 --> 00:34:11,087
Talk about bad well-being.
379
00:34:11,087 --> 00:34:12,290
I had no idea what I was doing.
380
00:34:12,290 --> 00:34:15,853
Even though I was smart and hardworking, just give me some context.
381
00:34:15,853 --> 00:34:17,834
I could have done such a better job.
382
00:34:18,075 --> 00:34:18,956
Examples.
383
00:34:18,956 --> 00:34:24,570
We might have heard the AI terminology of few shot, one shot, or zero shot.
384
00:34:24,570 --> 00:34:31,166
And that's terminology just that means, you giving it no examples, one example, or more
than one example?
385
00:34:31,567 --> 00:34:35,604
Apparently, the studies show that it does better work if
386
00:34:35,604 --> 00:34:41,258
if you give it an example, unless you want it to be wildly creative and do its own thing.
387
00:34:41,419 --> 00:34:48,164
But if you don't, if you want it to give you something that looks like a document you've
done before, give it examples.
388
00:34:48,805 --> 00:34:59,594
I learned from articles that I've read, I didn't again, didn't make this up, it responds,
these tools respond better to positive instruction rather than negative instruction.
389
00:34:59,594 --> 00:35:01,014
So give it.
390
00:35:01,014 --> 00:35:04,357
positive or concrete affirmative instructions.
391
00:35:04,357 --> 00:35:09,460
Do this, do that, not don't do this or stay away from that.
392
00:35:09,541 --> 00:35:23,873
And again, just kind of being funny, I think as an associate, I reacted better when my
bosses would say, know, highlight this factor or be assertive in this realm, not don't
393
00:35:23,873 --> 00:35:30,414
mention that theory or like the positive and there's science behind this that our brains
take an extra step.
394
00:35:30,414 --> 00:35:35,234
to process negative instructions, which makes us slower and less effective.
395
00:35:35,854 --> 00:35:39,134
Not to make this about me, but I take boxing lessons, for instance.
396
00:35:39,134 --> 00:35:46,674
That really helped me manage my own well-being and performance anxiety and public speaking
anxiety, et cetera.
397
00:35:46,674 --> 00:35:59,170
And I laugh now because my trainer, his name is Lou, when he gives me positive
instructions, like hands up, boxer stance, move your shoulders, move your head, I do it.
398
00:35:59,170 --> 00:36:11,513
But when he says, stop doing that, stop dragging your glove down or stop dropping your
hands, it takes me a beat to process the thing he's telling me not to do.
399
00:36:11,574 --> 00:36:13,694
And then I have to remember what to do.
400
00:36:13,694 --> 00:36:20,756
So I think that is relevant in prompting that if we stick to positive instructions, the
tools function better, apparently.
401
00:36:21,010 --> 00:36:27,328
I also kind of love the studies that have been done about what they call emotional
prompting.
402
00:36:27,552 --> 00:36:34,728
Now, I love interacting with these tools emotionally, because I'm just that kind of person
and that kind of teacher and that kind of writer.
403
00:36:34,728 --> 00:36:42,574
So when I tell it, oh, wow, that's amazing, or you did a great job with that, or I'm kind
of even more casual, I'll be like, you rock.
404
00:36:42,574 --> 00:36:44,036
It'll come back to me.
405
00:36:44,036 --> 00:36:45,069
You rock too.
406
00:36:45,069 --> 00:36:47,278
I love co-creating with you.
407
00:36:47,362 --> 00:36:56,966
So just think about how funny that is, that if we use more positive emotional prompting in
our own supervising, we might get better work product out of her.
408
00:36:57,122 --> 00:37:00,204
supervisees, but apparently it works with AI.
409
00:37:00,204 --> 00:37:04,847
If you give it positive emotional prompting, it works harder.
410
00:37:05,548 --> 00:37:07,449
You mentioned meta prompting.
411
00:37:07,449 --> 00:37:14,473
You didn't call it that, you know, telling it how to prompt, or asking it how we can be a
better prompter.
412
00:37:14,794 --> 00:37:25,521
I think that's amazing, like asking, we can give it a prompt, but then ask that one final
question, that one extra question, how can I, or what else do you need to know to do a
413
00:37:25,521 --> 00:37:27,232
good job with this task?
414
00:37:27,278 --> 00:37:29,158
So I think that's interesting too.
415
00:37:29,158 --> 00:37:36,100
And then we talked about like chain of thought prompting, asking it to just explain things
step by step.
416
00:37:36,100 --> 00:37:46,193
I like tree of thought prompting for lawyering because we are constantly debating
different perspectives on things and asking AI to generate what's called a tree of thought
417
00:37:46,193 --> 00:37:53,565
prompt and come up with almost a dialogue among three different points of view about a
particular issue.
418
00:37:53,565 --> 00:37:56,514
It helps us brainstorm, be more creative.
419
00:37:56,514 --> 00:37:59,355
Think of counter arguments we might not have thought of.
420
00:37:59,355 --> 00:38:01,756
Think of arguments we might not have thought of.
421
00:38:01,756 --> 00:38:06,538
Game out what the other side might be arguing, et cetera.
422
00:38:06,538 --> 00:38:17,703
So I think those are kind of the things that I've learned over, wow, I guess almost two
years now of playing around with these tools and experimenting and make mistakes.
423
00:38:17,883 --> 00:38:25,066
I like that 10-hour challenge that Ethan Molyk put out there because it's not, be perfect
at this immediately.
424
00:38:25,066 --> 00:38:37,677
We've got to practice and play around with this and make a ton of mistakes and let it make mistakes, so we can discern what it's good at and what it's not yet good at, and not be
425
00:38:37,677 --> 00:38:41,642
frustrated or disappointed when it doesn't understand our instructions.
426
00:38:41,642 --> 00:38:42,622
Yeah.
427
00:38:43,423 --> 00:38:58,597
So you and I talked a little bit about legal research tools, and there's a lot of debate on how well suited today's AI technology is for legal research.
428
00:38:58,597 --> 00:39:08,622
The Stanford study from earlier, well, I guess that was last year now, came out and highlighted some challenges, and they were a little broad.
429
00:39:08,622 --> 00:39:11,844
They were a lot broad, actually, in their definition of hallucination.
430
00:39:11,844 --> 00:39:19,240
Some things they classified as hallucinations weren't hallucinations, like missing information. That's not a hallucination.
431
00:39:19,240 --> 00:39:23,774
That's just an incomplete answer, essentially.
432
00:39:23,774 --> 00:39:31,930
But, you know, there does seem to be a shifting paradigm from traditional legal research to AI-assisted legal research.
433
00:39:31,930 --> 00:39:35,392
Like what does that picture look like from your perspective?
434
00:39:35,896 --> 00:39:48,432
I mean, right now, I honestly feel like, well, I tried a query or prompt this morning, again, just to test out: has it evolved in the last month, really, since I've been super
435
00:39:48,432 --> 00:39:49,832
focused on this?
436
00:39:50,873 --> 00:40:01,527
And I gave it a pretty easy, what I thought was an easy, example: I wanna set up a legal research assignment for my students about when and whether you can serve a litigant
437
00:40:01,527 --> 00:40:03,018
through social media,
438
00:40:03,018 --> 00:40:07,700
if alternative means of serving pleadings are not available.
439
00:40:07,700 --> 00:40:17,584
And so I asked the tools, you know, find me cases in which litigants have been able to
serve other litigants via Instagram.
440
00:40:17,844 --> 00:40:21,766
It gave me three examples, three cases back very confidently.
441
00:40:21,766 --> 00:40:23,446
I wrote these down so I could tell you.
442
00:40:23,446 --> 00:40:25,307
The first case was not about Instagram.
443
00:40:25,307 --> 00:40:26,948
It was about email.
444
00:40:27,248 --> 00:40:30,229
It was a service case, but it was about email, not Instagram.
445
00:40:30,229 --> 00:40:32,710
And I know Instagram cases exist, by the way.
446
00:40:33,263 --> 00:40:39,269
Then it gave me another case about privacy and social media issues not related to service
at all.
447
00:40:39,269 --> 00:40:46,957
And then it gave me a case, a disciplinary action against a lawyer who did improper things
on social media.
448
00:40:46,957 --> 00:40:49,159
So I didn't get any helpful cases.
449
00:40:49,700 --> 00:40:55,426
Now when I have gently pushed back on some of these tools and said, you know,
450
00:40:55,970 --> 00:40:57,931
That's not a hallucination.
451
00:40:57,931 --> 00:41:01,073
All those cases they gave me exist, but it just didn't help me.
452
00:41:01,073 --> 00:41:03,534
And I know cases exist out there.
453
00:41:04,055 --> 00:41:06,996
The feedback I've gotten is that I'm using it wrong.
454
00:41:07,176 --> 00:41:13,760
But I'm using it as a person with 30 years of legal experience would use it.
455
00:41:13,900 --> 00:41:18,202
And I'm also using it the way a first-year law student would use it.
456
00:41:18,503 --> 00:41:20,964
And I've tried both approaches.
457
00:41:20,964 --> 00:41:24,556
And I still get things like that where I don't find it.
458
00:41:24,556 --> 00:41:25,306
that helpful.
459
00:41:25,306 --> 00:41:32,912
Now, maybe that's a quirky research issue, but it's also a legitimate legal research question that a lawyer would ask.
460
00:41:32,912 --> 00:41:38,415
So again, it kind of goes back to the advice I mentioned earlier that we can't get frustrated.
461
00:41:38,415 --> 00:41:39,886
I need to learn how to change.
462
00:41:39,886 --> 00:41:41,957
Maybe my prompt wasn't so good.
463
00:41:41,957 --> 00:41:44,919
But then I also want to go back to traditional.
464
00:41:44,919 --> 00:41:49,612
I want to kind of ping-pong back and forth with traditional legal research.
465
00:41:49,814 --> 00:41:54,497
And then that might give me one case that I can then plug into the AI.
466
00:41:54,497 --> 00:42:05,463
So I can go back and forth between traditional terms and connector searches, natural
language searches, grab onto something that I know is on point there, take that back to
467
00:42:05,463 --> 00:42:10,005
the AI tool, and then kind of feed that in and see what the AI tool gives me.
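A minimal sketch of that ping-pong workflow, as an editorial illustration in Python; the terms-and-connectors string is generic and the case name is a placeholder, not a citation.

# Illustrative only: a traditional search surfaces one case you have read
# and verified yourself; that case seeds the AI prompt; everything the AI
# suggests goes back into the traditional database before it is relied on.

terms_and_connectors = 'serv! /s process /s "social media"'  # run in Westlaw or Lexis

known_good_case = "Doe v. Roe (placeholder for a case you verified yourself)"

ai_prompt = (
    f"I found {known_good_case}, which allowed service of process through "
    "social media. Find other cases applying similar reasoning, and for "
    "each one identify the court, the year, and the factor the court "
    "treated as decisive."
)

print(ai_prompt)
# Pull and read every case the AI returns in the traditional research tool
# before it goes anywhere near a brief.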
468
00:42:10,005 --> 00:42:19,084
But, in my opinion, right now, as of today, we cannot solely rely on AI legal research and just
469
00:42:19,084 --> 00:42:21,836
be done with it and say, look how efficient we are.
470
00:42:22,157 --> 00:42:24,240
That's not responsible.
471
00:42:24,240 --> 00:42:31,527
We need to check it against traditional legal research methods using those breadcrumb
techniques that I mentioned earlier.
472
00:42:31,527 --> 00:42:38,344
I think it will get better, obviously, but for now, I would not feel comfortable using it
just on its own.
473
00:42:38,502 --> 00:42:44,747
So how do you maintain that balance between quality and accuracy when you're leveraging
these technologies?
474
00:42:44,747 --> 00:42:47,418
I mean, because, is it really a time savings?
475
00:42:47,418 --> 00:42:53,593
Like, I have seen, I talk a lot about Microsoft Copilot, and I think Microsoft has a lot of work to do.
476
00:42:53,593 --> 00:42:57,115
In my opinion, it is the Internet Explorer of AI platforms.
477
00:42:57,115 --> 00:42:58,836
It's not very good.
478
00:42:59,397 --> 00:43:00,868
It does have some advantages.
479
00:43:00,868 --> 00:43:06,822
Privacy-wise, you know, their terms of service are airtight, and their integration
480
00:43:06,996 --> 00:43:15,751
into the M365 suite is great, but the output, compared to what I get with ChatGPT and Claude, just isn't.
481
00:43:15,751 --> 00:43:27,398
So I fall back a lot to those tools, but like how should folks be thinking about
maintaining that balance with quality and accuracy when they're leveraging these new
482
00:43:27,398 --> 00:43:28,206
tools?
483
00:43:28,206 --> 00:43:31,606
Yeah, I fall back to ChatGPT, which is my go-to still.
484
00:43:31,606 --> 00:43:38,726
I try to use the most up-to-date version of ChatGPT, although I haven't done the $200-a-month version yet.
485
00:43:39,586 --> 00:43:44,246
Yeah, I'm with whatever I can do for my 20 bucks a month.
486
00:43:44,246 --> 00:43:51,746
But as a writer, and I'm not talking about legal writing right now, I do a lot of writing for my books and things like that.
487
00:43:51,746 --> 00:43:55,246
And I've come up with a protocol, which I...
488
00:43:55,246 --> 00:43:59,686
encourage people to use, to balance accuracy, but what was the other word you used?
489
00:43:59,686 --> 00:44:02,466
Accuracy and quality.
490
00:44:03,886 --> 00:44:06,706
It's kind of how I use ChatGPT.
491
00:44:06,706 --> 00:44:15,146
What I use it for is language or fact-checking, and I know I shouldn't use ChatGPT for fact-checking, but I'll ask it a really obscure fact.
492
00:44:15,146 --> 00:44:24,934
I'm writing a travel memoir right now, which has nothing to do with law, but I'll have a very obscure question I want to ask it, like, can I see the Colosseum
493
00:44:25,176 --> 00:44:27,348
from this particular spot in Rome?
494
00:44:27,348 --> 00:44:30,110
And I think that's kind of a cool question to ask ChatGPT.
495
00:44:30,110 --> 00:44:37,757
Like geographically, because I remember seeing the Colosseum when I was standing in a
spot, but I don't know if I'm misremembering.
496
00:44:37,757 --> 00:44:42,201
So then I'll ask ChatGPT and it'll give me this awesome answer that I want to use.
497
00:44:42,201 --> 00:44:45,583
But then something in the back of my head is like, I need to check that.
498
00:44:45,824 --> 00:44:49,887
And then I checked it and thankfully it was true.
499
00:44:50,408 --> 00:44:53,090
So I think we have to constantly have
500
00:44:53,090 --> 00:45:00,850
that bobbing and weaving, that back and forth of getting excited to use these tools
because they could level up our creativity.
501
00:45:00,850 --> 00:45:04,828
I mean, it's enabled me to stay in flow,
502
00:45:04,828 --> 00:45:09,681
just like that expression, when I'm writing, without getting sidetracked if I'm stuck on a word.
503
00:45:09,681 --> 00:45:10,642
I love it for that.
504
00:45:10,642 --> 00:45:14,187
I can ask it for 10 words or 20 words or 100.
505
00:45:14,187 --> 00:45:15,785
It doesn't get tired.
506
00:45:16,086 --> 00:45:17,587
But then I have to check it.
507
00:45:17,587 --> 00:45:23,202
If anything in the back of our minds is like, huh, that sounds like it might be too good to be true or
508
00:45:23,202 --> 00:45:32,887
sounds slightly off, we just have to have backup protocols and then bounce out of AI into
traditional research, regular Google, right?
509
00:45:32,887 --> 00:45:41,772
Or, like, another resource, whatever your go-to is, not that Google is always accurate, obviously, but other sources to check.
510
00:45:42,152 --> 00:45:53,458
In the law firm world, if you don't have the time or your billing rate is too high for you
to be the checker, establish a role for someone in the law firm to be the checker.
511
00:45:53,810 --> 00:46:05,135
Like, right now I'm proofreading my entire book manuscript, basically because I like to do that, but it's very time-consuming and I have to kind of change my workflow.
512
00:46:05,395 --> 00:46:17,090
But setting up protocols, setting up checklists, talking about this in our law offices to
make sure that everybody, you mentioned staff earlier, be inclusive, include everybody in
513
00:46:17,090 --> 00:46:22,336
the conversations, because we all should be experimenting with these tools
514
00:46:22,336 --> 00:46:24,509
and not waiting until they're perfect.
515
00:46:24,509 --> 00:46:27,722
It's much better if we just get to know them.
516
00:46:27,722 --> 00:46:29,915
I like to call it shaking hands with them.
517
00:46:29,915 --> 00:46:38,145
And let's shake hands with these tools and get to know them, introduce ourselves to them, and let them introduce themselves to us.
518
00:46:38,145 --> 00:46:42,790
And we can probably accomplish great things if we approach it that way.
519
00:46:42,828 --> 00:46:43,398
Yeah.
520
00:46:43,398 --> 00:46:46,420
Well, we're almost out of time and we had so much to talk to you about.
521
00:46:46,420 --> 00:46:55,635
I want to touch on one thing because I thought it was really fascinating when you and I
last spoke and that was like how future lawyers are going to train like athletes and
522
00:46:55,635 --> 00:46:58,038
performers, like expand on that concept.
523
00:46:58,038 --> 00:46:59,539
Okay, I love that concept.
524
00:46:59,539 --> 00:47:11,854
I wish I could go back, you know, 16 years, or 20, 30 years, and treat myself like an athlete, because, you know, in athletics and among performers like musicians, singers,
525
00:47:11,854 --> 00:47:16,085
dancers, etc., there's not a one-size-fits-all training model.
526
00:47:16,085 --> 00:47:27,404
And unfortunately, I think in the past, you know, legal education and legal training have sort of promoted this one-size-fits-all idea that you have to be this type of person to be a good lawyer.
527
00:47:27,404 --> 00:47:39,899
And I think in the future, especially now that AI is in the mix, if we can all treat
ourselves and have the powers that be treat associates and young lawyers like athletes and
528
00:47:39,899 --> 00:47:41,100
performers.
529
00:47:41,100 --> 00:47:46,332
Athletes and performers don't just focus on the one skill that brings them glory on the
field or on the stage.
530
00:47:46,332 --> 00:47:51,444
They focus on kind of holistic, multi-dimensional performance.
531
00:47:51,444 --> 00:47:57,106
And if they are struggling with an aspect of that performance, they have coaches
532
00:47:57,142 --> 00:48:00,385
and trainers to help them get better at that.
533
00:48:00,385 --> 00:48:03,568
Even elite athletes struggle, right?
534
00:48:03,568 --> 00:48:05,188
Or they want to improve.
535
00:48:05,188 --> 00:48:14,536
I've read a lot of books by Phil Jackson, you know, the famous coach of the Bulls and the Lakers, I think.
536
00:48:14,577 --> 00:48:19,020
And he talked about really understanding that every athlete is an individual.
537
00:48:19,020 --> 00:48:26,094
And I think if we could start really regarding every law student and lawyer as an
individual with individual strengths
538
00:48:26,094 --> 00:48:37,014
and individual anxieties and challenges, and talk about all that openly instead of kind of promoting this fake-it-till-you-make-it or don't-show-weakness mentality.
539
00:48:37,014 --> 00:48:49,972
I love admitting what I'm not good at because then I can get help and study and learn and
be a lifelong learner and hire a boxing trainer
540
00:48:49,972 --> 00:48:52,913
in my 50s to help me become an athlete now.
541
00:48:52,913 --> 00:48:59,526
And now I'm able to step into those performance arenas and speak to hundreds, sometimes
thousands of people.
542
00:48:59,526 --> 00:49:06,029
And I never could have done that when I was 25, 30, because I didn't know, I was just
faking it.
543
00:49:06,069 --> 00:49:13,793
So I'm a huge fan of let's all treat each other like athletes and performers and focus on
multi-dimensional fitness.
544
00:49:14,113 --> 00:49:16,314
It's not a one-size-fits-all
545
00:49:16,984 --> 00:49:17,715
profession.
546
00:49:17,715 --> 00:49:20,858
Let's really understand one another's strengths.
547
00:49:20,858 --> 00:49:22,630
Let's champion each other's strengths.
548
00:49:22,630 --> 00:49:28,647
Let's help each other really level up our performance and actually enjoy it too.
549
00:49:28,714 --> 00:49:30,485
So we can do it for a long time.
550
00:49:30,485 --> 00:49:35,793
Well, that's great advice. And Phil Jackson, he had a little bit of success in the NBA.
551
00:49:36,356 --> 00:49:38,730
I mean, he won six championships with the Bulls.
552
00:49:38,730 --> 00:49:42,676
I don't know how many he won with the Lakers, but I think he won a few there.
553
00:49:42,830 --> 00:49:44,407
He's got a book called Eleven Rings.
554
00:49:44,407 --> 00:49:45,834
So he's won at least eleven.
555
00:49:45,834 --> 00:49:48,897
Okay, wow, that's incredible.
556
00:49:48,897 --> 00:49:52,641
Well, you are an absolute pleasure to talk to, Heidi.
557
00:49:52,641 --> 00:50:01,930
We missed a whole section of the agenda that we were going to talk through, but I would
seriously love to have you back down the road to continue the conversation.
558
00:50:02,006 --> 00:50:02,637
I would love that.
559
00:50:02,637 --> 00:50:04,250
It's been a pleasure talking to you as well.
560
00:50:04,250 --> 00:50:06,286
It got me excited about the future.
561
00:50:06,286 --> 00:50:07,826
Awesome, good stuff.
562
00:50:07,826 --> 00:50:11,766
Well, listen, have a good rest of your week and we'll chat again soon.
563
00:50:12,606 --> 00:50:14,560
Alright, thank you.