I find that discussions of AI are rife with forced metrics, and I cannot imagine why they matter to me or anyone else. Competitive creativity? What is the real-world application of that? Most of these metrics are only useful for bragging rights. To me, creativity is being able to solve a real problem you have not seen before; that is the creativity that matters. So what if someone else has already solved that problem with that method? You may have noticed there are billions of humans, so that is statistically extremely likely anyway, with or without AI. And if you are truly concerned about the creativity that advances society, those ideas almost always come when the creative person decides on a problem themselves, not when people are asked to combine a tennis racket with an alarm clock.
Totally understand your point.
And I like the idea that creativity is a way to solve problems.
This research is mainly about highlighting the inherent problems of these language models, rather than trying to define creativity.
There are always flaws that anyone can spot in studies like these, and there are always more factors that could be included.
The goal of this kind of scientific work is to spark new ideas and raise awareness, rather than settle the debate about what creativity really is.
Thank you for sharing your thoughts.
I’d love a copy of the above ChatGPT study.
Which one do you have in mind? There should be a link under each study I analysed, happy to share again if you can't find them.
Thanks Jing. I am a Professor and lecture on whether students should 'outsource' their creativity to an LLM. Your articles highlight the fact that while using ChatGPT, or other models, makes students think they are more creative, the end result is often generic and similar to other students' work. The result: many students in my business class come up with the same concepts for new products. I warn them about the issue of idea convergence, but it is difficult to combat that. Students want to maximize efficiency in getting assignments done, and they *think* their LLM-generated ideas are unique! I warn them about that phenomenon -- but the temptation to outsource is too high (*Sigh*). I now have a grading criterion: if their ideas are similar to others', they lose marks. They definitely *don't* like that in the rubric.
" I warn them about the issue of idea convergence, but it is difficult to combat that"
"want to maximize efficiency"
Thank you so much for sharing your experience.
We see the same thing in the business world. For example, a lot of managers use AI to write their team’s annual reports, which leads to generic, empty compliments that don’t mean much.
While this research is really interesting to read, I’m also hoping to find practical methods people can actually follow to avoid these problems.
Maybe I’m being too negative, so correct me if I’m wrong... but my takeaway so far is this: only the students (or teams) who know they need to do their own thinking will get better results. If someone’s just looking to tick boxes and get the work done, there’s not much you can do to change that.
Curious about your subject and research.
Hello Jing: My subject area is marketing and I am the Executive Director of the Future of Marketing Institute, a think tank at York University (Toronto). I sent you a LinkedIn invite. Let's connect there as it is easier to share thoughts and information.
Thank you, David :) I think I found you on LinkedIn. Yes, let's connect and speak there.
Very informative. Thanks for posting.
No problem! It's rare for people to want to understand AI this deeply, let alone read the entire post, so great work to you too!
This is a great discussion of an important topic. One of the issues I have with a lot of these studies is that they don’t actually talk about what kind of people are working with the AI.
If we understand that generative AI will take what we give it, elaborate on it, expand it, and then give it back to us in a different form, then the source of the inputs becomes critically important. In other words, it’s not just people who are using AI; it is different types of people. And studying their individual styles and approaches will tell us much, much more about why people got the results they did than monolithically looking at AI as “a thing”.
So much of what happens in these interactions with AI is highly relational, and we need to look at the dynamics of the interactions, not just the fact of them.
When we start adding nuance and detail to our research, it’s going to benefit us every bit as much as adding nuance and detail to our interactions with AI.
But until then, I really have to take these studies with a grain of salt. They’re only telling a portion of the story, and it’s no basis for any definitive understandings.
Great article—I am studying many of these questions as part of my research thesis in an MFA program in creative writing. I’m excited to share the results soon.
In the meantime, here’s another “reason” why using AI as a thought partner might not be a good idea - I think it aligns:
https://open.substack.com/pub/mikekentz/p/from-thinking-partner-to-sparring?r=elugn&utm_medium=ios
Thanks for sharing your article. Please share the questions you ask in your thesis; it would be interesting to see what you've discovered.