10 Comments
Alisa Belmas

I like your take on the connection between AI lies and its efficiency. As you've noticed, it often gives the answer we want to hear - I think it gets away with it because it plays into our confirmation bias, so we don't challenge its answers. Plus it still has some 'magical' quality to it, so it's no surprise it easily blindsides us with made-up answers.

Style Analytics

Totally agree with this! I also think this is the problem with using AI as a therapist - it's always going to confirm what you're already thinking, which can't be the best approach for every scenario.

Matjaž Horvat

This has sometimes been my experience as well. But other times it pushes back pretty hard, especially if I'm using it to "discuss" a topic where I happen to have unusual/unpopular views lol

natalia

I know this is gonna sound like "kids these days", but I actually enjoyed learning about the processes in (psychological) research and their relative strengths and weaknesses. Putting AI alongside other methods as "efficient but inaccurate/made up" should give people pause. I think there is a wider issue in society of people valuing convenience above anything else (see food delivery, taking cabs over asking a friend for a lift, Amazon), which ruins our connection to the world and other people. You have been compassionate to your commenter and intern, and I hope that's stuck with them. At the same time, the way companies champion it is rage-inducing. Yes, it can be helpful, but it's not automatically better 🤦🏻‍♀️

Claudia

“Essentially, models such as ChatGPT are programmed to do things in the most efficient way possible”

Not to be like "um, actually" 🤓 but this is not accurate: LLMs are just prediction machines that return the most probable next word (with some randomness). The inaccuracy isn't driven by a push for efficiency per se, but by a mismatch between what was most seen in the training data and reality. This is a simplification of course, but it's important to know that most of the chat models right now are not actually "analyzing" anything.
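
For anyone curious, here is a minimal, made-up sketch of what "return the most probable next word, with some randomness" means in practice. This is not how any particular model is implemented; the vocabulary, probabilities, and temperature value below are invented purely for illustration.

```python
import random

# Toy illustration: an LLM assigns a probability to every candidate next token
# and samples one. These words and probabilities are made up for this example.
next_token_probs = {
    "Paris": 0.72,     # the continuation seen most often in training data
    "Lyon": 0.15,
    "Berlin": 0.08,    # plausible-sounding but wrong: how a confident error slips out
    "bananas": 0.05,
}

def sample_next_token(probs, temperature=1.0):
    """Sample a token; lower temperature means closer to always picking the top word."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

print("The capital of France is", sample_next_token(next_token_probs, temperature=0.8))
```

The point is that nothing in this loop checks facts; it only reproduces whatever patterns were most common in the training data.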

Audrey Vinkenes

This was such a great read! People's reliance on AI makes me want to scream. One of my consulting clients told me in a meeting last week to use AI exclusively instead of writing my own copy. Unfortunately, at the moment I can't afford to lose that client, but the whole thing makes me feel sick, especially now that I can't use any of that "work" for my portfolio, and it's definitely not the best way to do things. I'm honestly tempted to just keep doing it myself, because I can't stand the idea of using AI to that degree.

May Spark

The other issue I have with using AI for 'mundane' tasks like interpreting qualitative data, identifying themes, etc. is that the person doesn't fully develop an understanding of how to do it independently. They need to learn this skill and exercise it regularly to integrate it into their work pattern. It's time-consuming and maybe boring at the time, but that's how you master something. Words that come directly from lived experience have nuance, subtlety, and meaning. That context is removed through AI.

I work as a social worker with people living with dementia, and we have been having discussions about whether it's appropriate to use AI for note-taking, so I will share your article, thank you.

Kara

Amazing! Sharing with my students! (I teach qualitative digital research to undergrads and cannot stress enough that AI is doing no favors when it comes to interpretation)

Chana

As a developer, I can clearly tell when someone has copied and pasted code from an AI. I will admit that for frontend frameworks like React, the AI is very good.

I can’t imagine it ever being able to add code to a legacy system and reason with a product manager. 🫢

Clementine

As a literature student, I used to rely on AI a lot because too many books and exams were just around the corner.

Most of the literary evaluations were pure garbage, or worse. Finally, in my fourth semester, I manned up and started studying from human blogs.

Hot damn, I saw improvement. Hot damn, I enjoyed it.
