This assumes all human therapists are ethical and never make mistakes, and that their offices, notes, and data systems are all secure too. All security is porous.
Depending on who you are, an AI chat might just be a less tedious journal, which can obviously be better than not journaling at all. I still find it sort of weird too, but the ridicule is unfounded imo.
From a privacy perspective it’s likely terrible, even terrifying, but given that most people are already largely transparent anyway, at least they’re getting some real-world value in exchange for that increased transparency.
Not sure…
1. An AI therapist can already handle general, sound mental-health advice: reducing cognitive load, perspective shifts, alternative methodologies, education about standard mental needs and processes, and whatever other low-level stuff we can benefit from.
2. Hooman therapists are a coin-toss. Many build their business on archaic and/or wrong theories and on personal ideology and feelings.
3. Whatever flaws AI has now are going away really, really fast.
Hooman therapists cost a lot of money, and a shitload of people won’t get any help at all without AI.
So, I think it is fine. The potential damage is far less than no help at all. Just use a little common sense and don’t take anything as gospel, same as we should do with hooman therapists.