CollabOasis Clinical Consultation Groups

New Groups Forming Now!

10. Revolutionizing Therapy: A Panel Discussion on the Pros and Cons of AI in Clinical Work

“There's this standard already of professional competencies. If you go into a new modality or you're using a different medium like with telehealth, you need to be proficient and have professional competency in what you are utilizing, whether that's a modality or mode of communication. And this fits into that as well. Might you need to be informed and equipped with the knowledge to be able to make informed decisions that are going to have the best possible outcomes in terms of client care as well.” – Liath Dalton

Welcome to the colleague down the hall podcast. This episode is sponsored by the collab Oasis clinical consultation groups. Hi, I'm Jeanene Wolfe, and I'm your colleague down the hall, I have a passion for helping fellow therapists get the clinical and collegial support, we all need to do this work. And wow, it just keeps getting harder every day. I'm the founder and facilitator of the collab Oasis clinical consultation groups. I have been a social worker for almost 30 years, and I own a successful solo online private practice, more of us than ever are practicing and solo or online practices. And we all need colleagues to process cases with commiserate with on those really hard days and also to celebrate our successes with in this podcast, I'll bring you insights about trends and changes in our field and sit down with amazing therapists who are doing amazing work will discuss fictionalized cases, ways to practice sustainably and of course, there will be plenty of laughing I love laughing with friends. I'm so glad to have you as one of my colleagues down the hall.

Hello, everyone, you are listening to another episode of The colleague down the hall podcast. I'm your host Jeanene Wolfe, joined today by a panel of mental health colleagues to have a respectful conversation about a new technology that is showing up everywhere these days. I'm of course talking about AI. We'll get into the ways it can be used in private practice the pros and cons of its use in various clinical ways. And how once again, tech is proceeding regulation. So we need to do our due diligence when we are considering using this. Welcome everyone, we're gonna take a minute and have each of you introduce yourselves. Tell us a little bit about your work in the field of mental health and anything else that listeners might want to know about you.

I'll go first. I'm Kim Tolson, I'm an LCSW. And I'm a therapist, but I'm also a coach to therapists, I help her therapists learn about insurance billing in a private practice, and also how to be a traveling therapist if they want to take the show on the road. But most recently, I've really gotten involved with AI, especially with my coaching business. And as I was using it, I figured out like I can write my progress notes with it, if I'm careful with pH i and how I do it. So that led me to write a course because everything I love I feel like obsessed with I have to write a course about so I wrote a course and I made it for therapists how to use it with their documentation and be safe with it with pH I.

Great. If you want to go next.

My name is lie of Dalton. I am the director of Person Centered tuck, which is an organization that does continuing education, training and provides material and resource support around legal, ethical technological use within practices with a focus on HIPAA security, compliance and risk management. This is a topic that has a lot of implications in terms of how to utilize this technology in a legal and ethical way, and what needs to be considered with regards to that. So I'm glad to be here today with all of you. Great, thank you. All right, Jenny.

My name is Jenny Hughes and I am a trauma psychologist and I both practice clinically seen adults who have experienced trauma and are dealing with things like PTSD. And I also work to support fellow trauma therapists in my membership, the brave trauma therapist collective. And the way that I have been exploring AI recently is in helping me to be able to better manage all the things that go into diversified income streams and managing content and creating things and really exploring the pros and cons and the ethical ways to use that in a non clinical therapist facing fashion.

My name is Tamara Howell and I'm a UK qualified psychotherapist based in France. I also create courses and coaching I help therapists build private practices online and online businesses. I'm also the host of Thera bundle. I would describe myself as a light AI user like an experimental user. So I've tried a few things and the places where I have used it in a way that I found helpful are things like emails, cause creation. I even and tried out AI headshots, images which Korea's legal legal statements course planning. And so I would say I'm like an I'm an early, early stages of experimentation. But I have big plans for it. So I'm really interested to be involved in the discussion today. Yeah, it's great.

Well, it's so good to have everyone here. As most of us who are in the therapists, Facebook groups know, this is popping up like literally every five minutes in a group somewhere, asking questions, debates, the complexities of it, all of those types of things. And so I think it's a really important conversation for us to have. And I also just like to share that most of us that are in those groups are private practice owners, solo private practice clinicians. And so it's up to each of us to choose what works best for us and do our own due diligence and be able to justify if someone investigates as how we made our decisions and how we're using these things. And so part of what I wanted to do today was kind of explore the different ways it's being used. I started using it, Kim Tolson introduced me to it. And I was literally like, awake all night, one night thinking of prompts. Because once you once you put a prompt in, and it like spits out something in three seconds. And it's so amazing that I was like trying to sleep and I just kept thinking like, I want to ask it this, I want to ask you this. So I also have not really been using it in my clinical work. I might I don't know, I actually haven't even considered that yet. Because right now I'm using it with the membership program that I have for therapists to get kind of consultation. So I think we, you know, we did a good job of talking about the way each of us is using it. Let me just throw this out there. Because in doing some research for us to meet today, I found that there are lots of services already out there that are being marketed to therapist, even a big hospital system, University of Kansas Help Center to serve systems has just adopted an AI program. It was very interesting to me, because I've been thinking sort of like small potatoes. And when I was looking into this, I realized there's actually companies where you can pay a fee, you do your virtual session on their platform, they record it. 
And then they spit out your progress note. And they say it's HIPAA and GDPR compliant. So it's out there way more than I was aware of and anyone have any thoughts on that?

I think it's amazing. I mean, I think it's a time saver, I think especially if hospital systems can ensure that it's going to be HIPAA compliant and protected. I'd love to hear what live thinks about that. But it's just going to make the workflow so much easier. And I feel like it allows you to have more client time, which is exciting to me. So you're not spending hours, like doing notes and documentation and that sort of thing.

Absolutely. Yeah, there's one platform in particular that specifically for mental health practitioners, that's available to solo practitioners as well, like, it's not only available on an enterprise tier that does HIPAA Business Associate agreements, where the training model for the AI transcription, as well is all based on therapist input. So it has that clinical context, where it's a lot more robust than chat GPT, for example, where the machine learning has a very different input in terms of the information that's fed to it. So I'm really excited about seeing those services that are designed to address the security considerations, but also are less likely to have the ethical implications, negative ethical implications, because of bias that can occur when it comes to machine learning and what the input content is.

It's so interesting, because as you're talking about it, I'm thinking, wow, that could really like improve the accuracy of notes. It could, you know, there's so so many times I've seen like a clinic letter written about and there's so many mistakes, or, you know, a doctor has, like heard something I didn't say or heard it wrong, or you know, maybe copy pasted or whatever, and it could change so many things. But I also had this experience of meeting with my accountant and then having their AI note taker in the meeting with us. And it felt like someone was in the room with us. And I know that, especially with AI where you're giving prompts like chatty PT, it can start to feel for some people a bit like you're talking to someone you're kind of making. I mean, I know that sounds like a bit weird, but it can kind of feel like you're having a conversation. I think that's a really interesting aspect. And I'm really interested in how that's going to affect for example, if we use a platform that does our notes, things like that, how does that affect the content of the therapy, the relationship with the with the client? Yes. Go ahead.

I also think and while I'm really excited about AI, I think that it's important to acknowledge that it can be a good starting point. But that especially in a clinical context, we have to be reviewing this stuff. So as a psychologist, you know, I'm trained to do psychological assessments, and we've had computer scoring for years and years and years, we don't just put the computer scoring printout into an evaluation report, because we have to interpret it, we have to use our knowledge and our humaneness to interpret that and then communicate it. And so even as I'm using it in Brave, for example, which is a non clinical place, it helps you with that, like blank page syndrome to get things started, I've heard someone say, and this is a quote from someone else, it'll get you to a solid C, but it's your job to take it to like a B or an A Right. And so we have to still have our humaneness there. And especially clinically like there are things that we intentionally do not put in the record, because it could be harmful to the client. And so we have to also we have to have that balance. We can't just have one or the other, I think is Yeah, comes up for me.

Right? Yeah. And as far as well, as applications go, I think there's a huge gamut of how we can be using this. So in a private practice, we can use it for marketing to get the right text copy for our ideal client to learn more about the population we're working with, to help us write social media postings or blog posts or things like that. Clinically, when I first started hearing about this, what came to mind was me was widely treatment planners and progress note planners, that you have this template, that's the starting point that you pull those pieces from. And I was like, Well, that would be a good use of this with, you know, not putting a specific client's information in there. But you know, write a treatment plan for a middle aged female who has anxiety related to xy and z. And then like Jenny was saying, then we have that to build off of index sort of get the wheels turning, because one of the things that I see with therapists all the time is that we are so exhausted, especially since the pandemic, that writing the notes has become a huge problem for therapists, and people are getting behind. And that's never a good thing. And then when you're tired, it's hard to even like think of like, what, what do I even put in a note, I feel like I've never even written a note, I'm so tired, I can't even like call those things to mind. So to have that jumping off point, would be a helpful thing. But then all the way to the far end, where you're actually having it, listen in on the sessions, or create the progress note, even do some progress trackers, those types of things. And you have to have informed consent for your clients. So they know that this AI bot or whatever it is, is listening to their session. And that's a huge implication for the therapeutic relationship, as we all know. And so those are all it's a wide range of issues to be looking at here and ways we can use it.

Absolutely, I want to mention that one of the things that has come to my attention with regards to use of chat GPT, in particular, for progress notes, is that folks are trying to balance the HIPAA considerations, and not put any identifiers into the prompt so that it does not, they aren't giving pH i to a third party service provider that they can't get a business associate relationship with. The result though, then is that what is fed into the prompt is a lot more limited than what actually occurred in the session. And so the examples I've seen a number of examples of here's the prompt I gave, and here's the soap note, for example, but it's put out where it is filled in a lot of content, including symptoms client to an effect, you know, visible emotional distress and upset, what the prompts were for how to respond to the situation that the therapist provided. Of course, that's something that then has implications because if that isn't what actually transpired that should not be going into the client's medical record. And one of the concerns that I have there is that in this context of burnout, right, that that is so real, that it can be compelling to look at a note that looks excellent. And you're like, oh, this really establishes medical necessity. If it got audited the insurance company would be delighted. You know, this is better than what I could write with my current capacity so I'm willing to utilize it. So that's where the responsibility of providers to jump me as you said, bring that human element And is so important. But I would also go further and say, from my perspective, I think, what if we're talking about use application for progress notes, in particular and in clinical contexts, that this is why tools that are designed specifically for this use application where you have a HIPAA Business Associate Agreement where you aren't trying to limit the information that it has access to, is going to be much better all around legally, ethically, clinically.

Right? Yes, I would imagine that listeners who are in private practice will already know what a business associate agreement is. But that's something that in private practice, we have to have with any organization, which we're sharing information with. That's a real simplified way of saying it. And basically, they're affirming that they understand HIPAA, and they're going to, you know, do the things on there, and that they need to do, and it's a protection, but it's also a requirement. And a lot of therapists who go into private practice don't necessarily know that. So that's an important piece to think about. And then also, there are times where we're looking at something, and a BA is not the thing that comes first to our mind that you know, I was looking for a fax service, and then all of a sudden, I was like, wait a minute, I have to make sure I can get a BA, you know, it's that thing that's always there, but it's a good protection. But yes, we need to do our due diligence that we are making sure what is there is accurate and appropriate. And my experience with chat GDP is that depending on what prompt you put in, or the complexity of the prop, like write this from the voice of or write this from the perspective of you can get vastly different outputs. And so yes, I think I agree with life, that having a platform, if you're going to use it for that is one that understands mental health, behavioral health, all those things and can interpret things in a different way. But even then, I can't imagine, like seeing this, they call it a magic note and one of the platforms that listens, and then 30 seconds later, you get a magic note. It was like that sounds very tempting, like live said to be able to be like, Wow, my magic. No, it looks great. I'm just gonna sign it and move on. But we need to make sure that we're not allowing the excitement of that to override good clinical judgment.

Yeah, I totally agree with life because I've created a whole course on this through chat TPT. And, literally, like you said, depending on the tone of voice and what you put in, it's going to fill in a sub note for you, it's going to fill in a whole treatment plan, it's going to fill in an assessment, I did a whole module on how to write assessments with it. And it does it just like assumes, you know, like I did a little fake profile, a fictitious case, and it like filled it in that he was single and his mother was an alcoholic, like it filled in all this stuff. So I made a whole module in the course about that, like professional responsibility, like you have to go through the stuff, you have to take it over to your EHR, copy and paste it over there and then edit it correctly. You know, you can't just like you said, like, be in burnout. Don't be like, Oh, this looks good. And just stick it over there and not think about it again. Because I could see that happening. It's such a good point. Yeah,

I really love this professional responsibility like aspect of this because I'm in France and and in the UK, as well. Like, we don't have Baa, it's really like GDPR police who are going to come and check out like a therapist in private practice, you know, as far as I know, really, unless someone gets ported to the ICO, but so it's really professional responsibility is like a therapist wants to be ethical and compliant. And so I think there's so much responsibility here, but also so much opportunity, you know, making some kind of checklists where we're like, okay, making sure that things that need to be left out or left out or written in a particular way, you know, making sure that everything that's in there is accurate, whatever it is, whether it's marketing, or whether it's to do with our clinical staff. Also, whatever AI we're using checking out like that I know, no one loves this, but reading the privacy policy and understanding what happens with the information, because the information from my chat GPT prompts may somehow appear in someone else's, even if I tell Jad chat GPT don't share the content of this with anyone else we don't know because it influences like and it keeps evolving and learning. And so yeah, yeah, just privacy policies are where it's at.

Yeah, can I actually did an experiment we had put some stuff together and then we you keep this private and not share it with anyone else? And it was like basically, no, this is crowd sourced information, and we'll share with anyone so then I had it writing some copies specific to my program. And at the end, I said, this is specific to my program that I created if somebody was asking for content related to this is this information you'll share? And it said no, that it was writing which was interesting. It was writing copy specific to my program. So it wouldn't automatically and who knows if that's true or not? I just think I'm skeptical because I'm sure they're gonna take over the world any day now. But they seem to imply that no, that would be kept private.

Yeah, I like I asked the same thing. So I use it for a course outline. And at the end, I said, this is my course outline. Will you share this with anyone else? I want it for me. And I said, No, I will not share it with anyone else. But then I've seen someone else's conversation where they said, Please don't use that kind of language about something. And then they said to chat, GPT, will you change how you use that language going forward and charge up to say, yes, how you instruct me affects how I respond to everybody. So yes, there might be nuances on like, they're not gonna it's not going to share my six module titles. But it will affect the you know, what happens later. I love all that nuance, and like all the differences and things like that, but I do think it is really interesting to learn about where it's stored, who owns it, where it shows up in the world.

Yeah, Cam, your Course talks through some of the ethics. Are there some checklists? I haven't been through it thoroughly, it looks amazing. Are there things to help someone guide? Like, how do I want to use it not just okay. Or do you assume they're starting from the place of wanting to do clinical notes?

It's geared towards clinical documentation. So I'm not getting into like, you know, social media posts or anything like that. I might, you know, it seems like I would probably add that on to for marketing your practice. But, yeah, yeah, it's not a checklist per se. But I love that idea. Tomorrow, I might just take it from you. But it does walk you through modules. So it does talk about all the ethical implications to think through like, check regularly with your licensing board? Are they going to start like issuing statements about how you can use AI in your practice? And what the requirements are going to be about that the professional judgment piece, I also have a whole module on like, what is pH i? And what is not pH AI? And how do you differentiate between that like, it's a whole list of that to walk you through, like all of those steps? Before they even get into? How do I use this thing, I've got all of that first, and I've got a big disclaimer, I'd like please read this. And then also, you know, please read this before you proceed. And then also that other part about you know, if anything it fits out is not accurate, you have to use your professional judgment, like we're talking about, and go through all of this in your own EHR in your own HIPAA compliant place where you're changing information. So that's how I've handled it in the course, so far to solving, you know, calling it like a beta launch with this course. Because I'm new to it, too. I'm still learning and trying to figure out the best way to navigate this stuff. You know,

yeah. So I think the recommendation that I imagine for someone who is just hearing about AI, and they're thinking, how can I use this, it would be a matter of looking at the options and saying, Maybe I just want to dabble in it, have it, just write some templates for me, just let me learn what it can do and change it up and see the different outputs that I can get. Or maybe I want to use it in my marketing, but not to first thing to just jump right in there and start having it write notes for you without understanding at all, what multiple people have mentioned how it's going to fill in the blanks about things that may be fiction, and we know that there are people have put in chat GTP something about tell me about the relationship of these two people. And they've made up a whole scenario about this boy, yeah, you know, fight that they'd had, that was completely fiction. So we need to make sure that people understand that this is operating in fantasy. And we're trying to use it in the real world.

Can I ask, Can I ask a question about because Kim, you were talking about the disclaimers and it's just in marketing that's so clear to me that in the bottom of my email, I would say a I helped me to write this or I use Chad TPT to write this on. I think it's kind of cute to say this headshot is 60% Me and 40% I don't know who mashed up with me and I you know, I think that's okay and kind of emails and posts and whatever you know, to say to say that but how do you do that in clinical practice? So when you talk about disclaimers, if you are going to be using it as part of clinical practice, do we need to put that in our informed consent to we need to put that on our website somewhere or

I was just gonna do that? Yeah, I was just gonna say I actually put an informed consent in there and you know, it's a template of course, but you know, with ideas of how you could include this in your informed consent to say I may use AI to assist me in your treatment planning and your clinical care and you know, discuss it with your client. I talked about that in the course to like sit down with your client and talk about this, see if they're okay with it and explain what this means and totally,

yeah, awesome. I got chat GPT to write all my disclaimers for all those different places as well.

I got to make the informed consent for me right the templates I'm like, give me like for informed consent templates to share with the therapists that are going to be taking this course. It's just amazing. It's what you can do with it. Yeah, but yeah. And then, of course, I take that information, and I apply it to what I know about the political world and put it in the course, you know. So it is a combination of my mind and the robot's mind, like coming up with this together, you know,

yeah, yeah. So lay it from the overview that you have, which is different. You work in the mental health field, but you're not a clinician, you're a tech guru. So give us some thoughts about what we've been discussing so far.

The main thing, because I think we've kind of honed in on the primary use application that I think has most people's interest piqued, is assisting with progress notes, because that's this huge burden. And and something that is creating friction for a lot of folks. Right. So in terms of that, the main consideration is always who is handling your information? How has it been handled, what happens with it, and when we're considering what is the threshold for what actually constitutes pH I protected health information need to be very precise and thorough in that understanding. Because otherwise, if you don't have an accurate conceptual framework of what constitutes PHSI, it is very easy to suddenly have put a bunch of protected health information into another system where you don't have the necessary legal and ethical assurances in place that they are going to handle that information according to the applicable standards and requirements. So I think, because there are different platform options, right and different ways that people can utilize it. So I want to end wanting to draw people back to what kind of a foundational responsibility is that's going to help each of you make more informed choices as you were evaluating how to use something if it's usable, and figuring out what your responsibilities in correct utilization are? So I love Kim, that that was a core component of the course that you did. That's wonderful to see. And then I do think my perspective would be that utilizing tools that are specifically designed to provide those assurances would be, you know, the PCT recommendation as what to what's going to be most protective for people. And, you know, we've been through this sort of new technology emerges, it's not yet far enough along necessarily when it first launches to be designed to specifically be meeting the needs of mental health and healthcare professionals. 
And so some things have to catch up that in that interim period, I've seen a lot of kind of mental gymnastics occur, in which folks will try to, well, this isn't pH I, or this makes it okay, and kind of bargaining in that way of like, oh, I can use this service that doesn't provide a business associate agreement or something like that. And we've gone went through that with email, then phone service, and texting programs, et cetera. So I'm hoping that maybe some of the lessons learned from going through those processes will then apply to this of just it makes your life easier to use the right tool that completely covers your legal and ethical needs, of course, there's still going to be correct usage responsibilities. But it makes more sense to do that than be because it's not a tool that specifically does meet those needs, that you're trying to kind of contort how you utilize it in order to make it feasible.

Yeah, that makes me think of I was just working on something yesterday related to therapists and their finances and money and how it's so closely connected to our burnout, and how we can be very tempted to DIY a lot of stuff to save money, right, so that we're not paying someone or something to do it for us. But that ultimately, one is it actually saving you money in the long term. But when it comes to things like this, we have to be so careful to protect our clients and protect ourselves that even if you don't have any ramifications right now, who knows what's gonna happen five years from now when there really are different changes in legislation and things like that, like we have to have a long term vision of this. And it's worth investing in high quality products and services and training that's going to really make sure it is protecting us and giving us that coverage and cya that we deserve and our clients deserve over the long term because AI is not going anywhere. It's been here for a long time. Now. It's just so different because we're all talking You know, Chad GPT. But it's I think that investment in self and business and the people that we care for is worth way more than saving a, you know, diamond Nicoline it to DIY.

Absolutely. And I love that point about the self care or cya. And one one thing that we say all the time is that cya is self care. And in this whole context, you have to think of the like, what's your cost benefit analysis? And how much cognitive overhead? Is it taking, like, we want to be evaluating what is adding to you? what is it detracting does that balance out? And that's something I think that will probably vary provider to provider, as well.

Think something that occurs to me when I was listening to you is this is it's a whole new skill, yes, it might be a time saver, it might even be a money saver, right? And but it's also like a whole new skill, to learn how to do things, and especially to learn to do them ethically. And outside of the US, where we don't have different countries have different licensing and privacy laws and things like that. But in some places, like the UK, for example, where psychotherapy, and counseling isn't regulated, and it is, like totally based on personal responsibility, that's also an added load. It's also like hard work to learn a new skill and also have to think about like, Okay, should I be doing this? And like, is this difficult, you know, how do I learn the skill, but also feel like it's okay, and kind of, you know, manage all of the nuances of it, and all those different aspects. So, so like, I know, I sound like a real critic, and I'm very excited about it about AI. But I just as everyone speaking, I'm thinking like, Okay, well, yeah, the cost benefit analysis also has to include like, how much time it takes to learn this skill as well.

Yeah. And you know, tomorrow, one of the reasons I wanted you here today is because you are a very critical thinker, and you always bring those as life obviously, and not that the other ones aren't. But that's, that's a really strong skill. You always like think of those little angles that I don't necessarily think of those were some really good points there. And I think it with so many things that we can do with our private practices, there are things that we can do for free all the way to really expensive paid options. And it's a matter of deciding how am I going to use whatever this thing is that I want to pay for, and learning how to use it learning that new skill set that we've been talking about? And is it worth the investment in the money if I'm going to be laying awake at night thinking did I really proofread that progress note it generated for me good enough, before I signed it and put it like hit you know, hit Enter. In my note, most therapists that I know have set the bar really high, we want to do things appropriately, ethically, humanely, compassionately, all those things. And so I can imagine myself literally, like not being able to sleep on one night because maybe I was rushing. And so maybe I don't click fine, right? When it generates it, maybe I have a policy that I need to wait an hour, like when I'm angry, and I type a text to somebody and I don't sit hit Send because I know an hour from now, I'm gonna be like, Okay, I'm a little more calm. That's a harsh text. I'm not gonna send that. So So that sort of, like checks and balances process?

Well, I think there's this standard already of professional competencies. You know, if you go into a new modality, or you're using a different medium, like with telehealth, you need to be proficient and have professional competency in what you are utilizing, whether that's a modality or a mode of communication, and this fits into that as well. Right? You need to be informed and equipped with the knowledge to make informed decisions that are going to have the best possible outcomes in terms of client care. And the fact that you have created a training to help people get that knowledge and start identifying what questions they need to be asking and what needs to be evaluated as they're stepping into this new territory, I think that's the direction everyone needs to be going: as we step into this, getting training, going to experts, consulting with peers, really thinking it through, and making informed decisions. Even if the standards or the ethics codes aren't fully there yet across the board about AI use, there are other components of the ethics codes that should be informing how everyone is thinking about this and approaching it.

Yeah, and this kind of falls along the lines of the many, many gray areas we have in our work: the decision-making process, the consultation with other therapists, and all the due diligence we do to make sure we're making the right decision. And this is kind of a shiny-object-syndrome type thing that's really exciting, especially when platforms are using phrases like "magic notes," and it's being discussed, and people are posting copies of templates and treatment plans and things like that in Facebook groups, and everyone's losing their mind over how amazing it is. We do need to take that step back and think, Okay, before I jump in, what is my decision-making process? Like I said, what are the other tools I have for making these types of decisions? Because if you're using it for progress notes, templates, and treatment plans, you are making a clinical decision, and people need to be really clear about that. It's impacting the clinical care you're providing to that client, as well as the protection of your license. So you need to be looking at that too: not doing something that's a shortcut today that you're going to end up losing your license over, and then all the hard work you've put into getting where you are is gone.

And I want to add, because this is a nugget of truth and guidance that we go back to a lot, something that Eric Strom, who's an attorney and a clinician we collaborate with frequently, says so often: it is okay to be wrong; it is not okay to be unreasonable. When you have an ethical conundrum, you need to make the best decision you can based on everything available to you and due diligence. So you need to be reasonable, even if the end result turns out to be wrong when more facts are known. If you were reasonable and can justify how you came to that decision, that's really what we're aiming for. If we don't have absolutely clearly defined parameters around everything and we're having to make those choices in gray areas, we want to be reasonable.

That's so good. That's the most amazing point to close this out with; you just summed it all up.

It's really courtesy of Eric, but I'll use it freely.

Yeah, yeah, absolutely. It's high-risk work that we do. I think probably none of us realize that when we go into the field; we just want to help people. But the longer you're in the field, the more you realize the risks that are involved, and there are lots of different layers, lots of different perspectives. That's why I wanted to have this conversation today: to raise some of these points, to encourage therapists out there to be critical thinkers and to think about their ethics, their values, and the therapeutic relationship. I'm not sure I would be comfortable seeing a doctor if I knew that AI was listening in on every word I said. I know for sure I would leave things out. And the implications of that are huge in our field, because people need to be able to tell us anything and everything. So those are all considerations we need to have. And even if you're using something like ChatGPT and you're not specifically putting in identifying information, trying to be vague, but you're putting in information about a specific case, make sure that's in your informed consent in some way: that the use of AI is something you periodically use in your practice. That way clients at least know that component is there. And I think that's a really important consideration to have.

Okay, and I really love what Liath said about if you're finding that you're contorting things to try to make it fit. Right? Go get one of these programs that Liath is talking about that is HIPAA compliant and just spend the money on that. And I agree that in that ethical gray area, if it feels weird, it probably is, and

take some time to really go through your own practice's standard operating procedures, or SOPs, so that you have documentation of how you are managing it, knowing that it's a dynamic document when it comes to something like AI. You can keep copies to show how you're changing and evolving. That's another important thing that happens behind the scenes: we have to make sure it's in our consent, that we talk to our clients about it, and that we don't just send them the consent and assume they read it. And then make sure we have our own paperwork. I have this huge binder over here with all of my stuff outlining how I'm using it right now, and then I keep iterations of it as it changes.

Absolutely. Yeah. And that's such a great point, Jenny. I will actually joke with my clients at the first session: I doubt you really read the full seven pages of my informed consent, so let's go through it point by point and let me highlight things to make sure we're on the same page. Because we can say they signed it, but it's really our responsibility to make sure they really understand the implications of it.

Absolutely. I mean, that has major implications for the ability to maintain an effective therapeutic alliance as well. Because if that's not disclosed, or if it's in the informed consent but was not discussed and they didn't actually see it, and then they find out later, that can understandably feel like a huge betrayal of trust, and it can completely rupture a therapeutic alliance.

I was just gonna say, it's just like part of our social media policies as well. And, you know, I was thinking that when people listen to this, there are so many other ways we can use it too, like writing SOPs, creating templates, even getting more information about SOAP notes, finding ways of writing phrases, or making things easier for us. And I know I've been a little bit critical, but I just want to mention this for people who find it really difficult to start from a blank page: if you're gonna use it, make it work for you, in a way that feels really comfortable and feels ethical, but also makes your life easier. We don't need to use a tool just because other people are using it.

Absolutely. Okay. Does anyone have any final thoughts they'd like to share?

I think it's a really exciting frontier, and there are amazing benefits it has the potential to bring to practitioners, as well as to better treatment outcomes and client care. Those potentials, though, can only really be realized if it's approached in a very diligent, ethical, responsible way. So the fact that this conversation is happening, that people like all of you are thinking about it, talking with your colleagues, and creating resources around it, is hopeful, because it means there's at least a component of the movement around utilization that is looking at the right things, so that the potential can be harnessed and the negative potentials aren't realized, or at least not realized to the fullest extent possible, I guess.

Yeah, yeah. And additionally, this is a new technology for the general public, and people are having wider access to it. So we also need to stay on top of what's happening with our professional bodies and our licensing boards and all the different resources we have at our disposal that are giving us guidance, because they are doing the research and looking into the potential harms, and hopefully identifying things that maybe we haven't noticed yet or haven't occurred to us. So if we're going to be using this, we need to continue with that education, just like any other continuing education we do in our field. If you're going to use it, it needs to be added to the list of things you're staying on top of, so that five years down the road I don't say, Well, I started out using it responsibly, but I didn't really stay on top of it and realize that these five things have come out that I didn't know about, and now I'm not using it responsibly any longer.

Well, thank you all so much for taking the time to join me today. And you know, I'm on this thing about trying to show how people can come together and have respectful conversations, because we see so little of that in our society. So I really appreciate that we could once again demonstrate that. I don't think we had any major disagreements anywhere, but even just talking through the issues and the topics, and understanding that we are limited but sharing what we've learned so far, we shared our thoughts and perspectives. And now it's on the listeners to take that next step and figure out if they're going to use this, how they're going to use it, along with all the other tips and thoughts that we shared today. The information for each of our panelists will be in the show notes, so you'll be able to access them and the programs that they offer. For everyone listening:
There are new episodes of The Colleague Down the Hall podcast released every Thursday on all major platforms. Please remember: our work is hard, but it doesn't have to be lonely. Thank you so much for listening to The Colleague Down the Hall podcast. For show notes, links, and downloads, head over to colleaguedownthehall.com, where you'll be able to learn more about getting the clinical support you need and resources to help you work in a supportive, sustainable way. If you've enjoyed this episode, please share it with your therapy friends and colleagues, subscribe to the podcast, and if you loved this episode, please leave a review.