109

I work in computer graphics for a small company on our own game engine. We also have an in-house team of artists creating content for this engine.

I am often tasked with taking suggestions from the artists and implementing them in the engine. The artists have no technical expertise, so I meet with them to understand their ideas and needs, and do so again to explain the functionality, limitations, and so on of my solution.

Recently it has happened quite a few times that after such a meeting (where I explained that their requirements aren't 100% achievable and provided an alternative working solution) I get a message from the artists, along with a screenshot of an ostensible but frankly ludicrous solution proposed by ChatGPT. They then ask why I could not do what ChatGPT suggests.

I then have to take the time to explain why ChatGPT's proposed solution wouldn't work, which is tedious and difficult when the other people do not understand many of the basic ideas involved. They also seem skeptical, and I get the impression they feel I'm incompetent: as I understand it, ChatGPT is very useful in their setting, and they have come to believe it to be the ultimate source of knowledge.

How can I, without being condescending to either my coworkers or their use of ChatGPT, ask them to not make suggestions to me that they personally don't understand and are based solely on ChatGPT?

user140244
  • 2
    Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on [meta], or in [chat]. Comments continuing discussion may be removed. – Kilisi Jan 10 '24 at 18:40
  • 19
    I don't suppose you could ask ChatGPT to make some art that you can implement, and then send that art to the artists as an example of how they could make more implementable art? Or alternatively, just tell them that they shouldn't try to use ChatGPT to do your job for that same reasons that you shouldn't try to use ChatGPT to do their job (because it sucks at both). – RBarryYoung Jan 10 '24 at 18:48

13 Answers

67

Thanks for clarifying in the comments; to bring that up here:

I take the time to understand and consider each suggestion, not rejecting anything out of hand, and share them with my team members (of which there are 17). Their advice has been, thus far, to "just ignore". However, I would not like to treat my co-workers that way.

Sounds like you need to break that pattern, but also you don't necessarily want to be the hammer that drops in order to break it - as that rarely gets received in a friendly fashion, especially when they are used to just getting you to answer and consider the ideas.

The solution in that case is quite simple - reach out to your boss/manager and explain that while you appreciate the suggestions, they are not helpful (have an example or two ready here), that they drain a lot of your time and focus (again, an example or two), but also that you do not wish to be rude to your well-meaning co-workers (keep your face straight saying that), and ask them to step in and be the hammer that curtails the flow of unhelpful and uneducated advice. Focus on the points you can easily prove - that the advice is wrong and that it's a time drain - and don't go into motives or anything else.

Ultimately that's the best way: you are not in a position to tell them what to do, just as they cannot really tell you how to resolve an issue; trying to pull some imaginary rank would likely alienate them further, and speaking in a friendly way to get them to see the error of their ways has already been dismissed. So it's time to call in the actual powers that be, and let them manage, as they are hired to do.

If the drain then continues, raise it with your boss/manager again.

Small addendum, because it somehow got almost as many upvotes as the answer:

Only do this though if you’re certain your manager won’t side with the artists.

If you have a boss that will not act on well documented time waste (even if its coming from a good place) and instead will drive you towards the time waste, that's a great thing to find out as soon as possible, and a whole other problem to rectify. But at least you will be aware that the problem exists now, and can address it.

Aida Paul
  • 6
    Only do this though if you’re certain your manager won’t side with the artists. – bob Jan 10 '24 at 02:46
  • 42
    @bob Why? If you have a boss that will not act on well documented time waste (even if its coming from a good place) and instead will drive you towards the time waste, that's a great thing to find out as soon as possible, and a whole other problem to rectify. But at least you will be aware that the problem exists now, and can address it. – Aida Paul Jan 10 '24 at 09:03
  • 39
    @bob if that happens, then this is a great thing. It means that you are expected to invest time in educating co-workers on technical topics. You can then thank your boss for bringing it to your attention and tell them that going forward you'd be investing X hours per day/week/whatever to fulfill that expectation instead of working on implementing stuff in the engine. Start preparing those topics and set up regular meetings for education purposes. You can use some of the X hours to polish your CV and start sending it around. – VLAZ Jan 10 '24 at 09:10
  • 9
    @VLAZ "polish your CV and start sending it around" - or just accept you are getting paid now to waste time and own it... if you communicated that it is time wasted and you are still made to do it... Maybe you can build new expertise on the go? – Fildor Jan 10 '24 at 09:13
  • 3
    @Fildor possible. Depends if you're interested in this. The "can" in my last sentence was to show that it's an option, rather than an expectation. – VLAZ Jan 10 '24 at 09:15
  • 6
    It might even be worth mentioning that ChatGPT answers have been banned from the programming stack exchange for similar reasons. – Max Hay Jan 11 '24 at 14:26
  • Perhaps use ChatGPT to generate a response to their ChatGPT suggestions? – jsf80238 Jan 11 '24 at 15:50
  • 1
    @Fildor Doing something that is a waste of time is a trap, even if you are being paid for it. You probably want to be paid more in the future, and what you are doing is what you get better at, and being good at useful stuff is good leverage to getting paid more. Spending your work time getting better at something that is a waste of time hurts your future self's skills; in comparison, if you spent it on something of value, you'd still get paid and you'd gain new skills that would be useful in the future. – Yakk Jan 11 '24 at 21:50
  • 3
    @bob This is how managers work: you tell your manager about a problem, the manager says: "please do what I tell you to do", and you have to do the stupid thing, BUT when that stupid thing does not work, maybe for a different reason (stupid things are usually stupid in more than just one aspect), the manager DOES take into account your considerations and puts forward a completely different approach. – 18446744073709551615 Jan 12 '24 at 00:07
  • If you can't fight them, join them -- start giving them ChatGPT-generated responses (don't hide it, either). See how long it takes for message to get across ;) – A.S Jan 12 '24 at 13:51
  • 1
    This is not bad advice. But I feel this is advice for a workplace with a very strict hierarchy, whereas in many other workplaces it would be expected to resolve this kind of conflict directly between employees. – seg Jan 12 '24 at 18:06
30

TL;DR - If you need this to stop, you need proof that the suggestions given to you are of low quality or outright wrong/infeasible. You do not need to differentiate between a suggestion generated by a human and one generated by a machine. When someone presents a solution and you have genuine reasons to turn it down, feel free to do so.

Treat this as a case of co-workers who are not learning from their past interactions with you and who keep providing solutions which are not feasible or plausible in your setting/context. Whether the ideas or suggestions are generated by themselves or via prompt engineering is not really relevant here. As needed, document these interactions and present them to your supervisor/manager, showing how you are given below-par suggestions and have to invest time, again and again, in proving them wrong.

Eventually, when they see that copy-pasted content is not going to cut it, they'll put real effort into making their suggestions worthy.


Edit after the comments:

Yes, at the beginning it is an uphill battle, as suggestions can be generated using ChatGPT (or a simple web search, without context and without proper research and adaptation) in relatively little time, while it takes much longer to disprove them. However, this is still necessary, for the following reasons:

  • It creates a document trail of your efforts. This not only shows that you are putting effort into filtering the suggestions, it also shows that you are providing explanations and feedback on why and how the suggestions are unfit for the purpose, and that you're trying to help your co-workers improve.
  • It shows that you are actually evaluating the solutions, not their source. People cannot cry foul that their suggestions are being ignored because they are "smarter" and generated suggestions in "clever" or "efficient" ways (whatever that means).

On a lighter note: use ChatGPT to come up with the reasoning for why the ChatGPT-provided answers cannot be used.

Sourav Ghosh
  • 45
    The problem with this answer is that it's low effort to get the ChatGPT suggestion and high effort to analyse it, determine it's infeasible, and then convince your unbelieving colleagues who give it too much credit. It feels extremely unfair that they generate more useless work for you with little effort, while not trusting your judgement, and the advice is "treat these as genuine suggestions" when they clearly are not. – Andrew Savinykh Jan 09 '24 at 23:21
  • 27
    This answer is falling afoul of the "one fool can ask more questions in a minute than twelve wise men can answer in an hour" proverb. However, I do believe this can be resolved by simply letting it happen, doing your due diligence for every single request, and then turning around to management and using this as proof that you need better staffing or an otherwise reduced workload if they want you to be able to help the artists in the way that they're asking to be helped. – Flater Jan 10 '24 at 00:38
  • 2
    @AndrewSavinykh Unfortunately, there is very little you can do with people if they believe they are being smart by suggesting something they do not fully understand. You need to invest some time proving them wrong, so that this can stop. Instead of resisting (without evidence or proof of the idea being infeasible or outright wrong), showing them why and how they are wrong is more fruitful in the long run. – Sourav Ghosh Jan 10 '24 at 02:12
  • Plus, you have the added advantage of a document trail, showing that they are the ones consistently feeding you low-quality or even improper solutions, thereby wasting your time as well as theirs. This can be helpful in review discussions, where it shows you are not only trying to save your time but are providing constructive feedback to improve them. Whether they heed that is up to them and a different point altogether. – Sourav Ghosh Jan 10 '24 at 02:14
  • 25
    @Flater Interestingly, this "one fool/wise men" argument is exactly why AI content is banned from StackOverflow. I was originally against the ban -- content is content, after all! -- but can see the point, partly because of this question. It's not that it's all bad -- it's that it's often worse than it looks and hence unreliable unless vetted. The amount is overwhelming, so it's impossible to sift through it all and tell good from bad. – Peter - Reinstate Monica Jan 10 '24 at 02:23
  • @Flater valid point, added an explanation for the same. Though, I believe the argument is still valid, with or without ChatGPT (no offence to anyone involved). – Sourav Ghosh Jan 10 '24 at 02:27
  • 18
    "On a lighter note: Use ChatGPT to come up with the reasoning why the ChatGPT provided answers cannot be used." - Genious. *standing ovation – Fildor Jan 10 '24 at 08:40
  • 9
    "Use ChatGPT to come up with the reasoning why the ChatGPT provided answers cannot be used." — I think this is important enough to be an answer in its own right! – gidds Jan 10 '24 at 10:47
  • 3
    I would ask GPT for suggestions on artists own works. – Lucas Morin Jan 10 '24 at 10:47
  • 1
    "On a lighter note: Use ChatGPT to come up with the reasoning why the CHatGPT provided answers cannot be used." Fantastic. Put it in the beginning of your answer, because this is the answer ! – EarlGrey Jan 10 '24 at 12:23
  • 2
    @EarlGrey: The top answer on meta.stackoverflow Temporary policy: Generative AI (e.g., ChatGPT) is banned is a ChatGPT-generated response to why ChatGPT answers should be banned.... And a ChatGPT-generated response to why they should be allowed. Which also nicely demonstrates that it'll tell you what you want to hear. (Regardless of the truth for questions that aren't purely subjective). e.g. Does this ChatGPT "swap" snippet do anything? (no, it loads and stores both values back to their original locations) – Peter Cordes Jan 10 '24 at 16:26
  • 1
    I commented a couple years ago when ChatGPT was first becoming a problem for Stack Overflow with some specific examples of the total garbage it invents when trying to explain the reason for something. – Peter Cordes Jan 10 '24 at 16:30
  • 1
    Simple solution: create a safe area for experiments and let the Artists run the Asylum. – Happy Idiot Jan 11 '24 at 02:40
  • 1
    @PeterCordes that was 2 years ago? I feel like it was a few months... maybe I'm getting old. – mbrig Jan 11 '24 at 22:19
18

“Sorry, but ChatGPT does not have a deeper understanding of what it is talking about, which is very important in computer programs. It is at best a rough suggestion created by stitching together various snippets, and it requires an experienced developer to debug and fix. This will most likely take longer and be of inferior quality to what we have already. Thanks anyway for your efforts.”

I had ChatGPT tell me how to build OS/2 programs with GitHub actions. Everything looked fine except GitHub didn’t support it. Hopefully this will also improve - it is still impressive what we have now.

Thorbjørn Ravn Andersen
  • 39
    Remove "deeper" from "deeper understanding". – gnasher729 Jan 09 '24 at 15:50
  • 3
    @gnasher729 It softens the blow... – Thorbjørn Ravn Andersen Jan 10 '24 at 01:04
  • 1
    "Deeper" is a comparative word. Without reference to the thing to which the degree of understanding is being compared, many would write-off someone making such a statement as a person swinging around words they don t understand.(Deeper? Deeper than what? What on earth is s/he on about? :roll-eyes:) Substituting the word "deep" would not suffer from the same problem. – enhzflep Jan 11 '24 at 08:04
  • @enhzflep I remember hearing a business consultant saying, "That will be a deeper listening" every time we brought up a complex issue in initial consultation meetings. I found this vacuous phrase tiresome. – Happy Idiot Jan 11 '24 at 12:11
  • ChatGPT is very frequently outright wrong when it comes to anything that requires a grounding in actual facts. I have had it give me downright dangerous advice when I was testing it out. – TimothyAWiseman Jan 11 '24 at 17:39
  • 1
    @enhzflep i am not a native speaker. This is what I would have said in Danish. – Thorbjørn Ravn Andersen Jan 12 '24 at 00:15
  • I don't like the suggestion to say it will be inferior quality. You don't know that - it's your opinion, and likely a correct one if you're experienced in the field - but I'd prefer to stick to verifiable facts when possible. – user253751 Jan 12 '24 at 00:29
  • @user253751 actually here it is the opposite - it is the responsibility of the submitter to verify it is 100% correct. Programming is hard for a reason. – Thorbjørn Ravn Andersen Jan 12 '24 at 01:26
  • @ThorbjørnRavnAndersen Well, no. If you tell the artist you think the character's hair should be darker, you don't have to verify that making it darker would make the character look better to say that. – user253751 Jan 12 '24 at 02:09
  • @user253751 yes, but this is programming. – Thorbjørn Ravn Andersen Jan 12 '24 at 07:17
17

As a former game engineer, I note the OP's comment to the question:

I take the time to understand and consider each suggestion, not rejecting anything out of hand, and share them with my team members (of which there are 17). Their advice has been, thus far, to "just ignore". However, I would not like to treat my co-workers that way.

If you have a team of 17 all of whom are telling you to "just ignore" it, then I would recommend that you follow the team's consensus. If you must make it a bit more diplomatic, then you could briefly reply, "Our team policy is to not take outside technical suggestions", and/or "If you think this needs escalating, you can address it with our manager".

Edit the exact phrasing to your situation or taste, of course, but I would keep it curt to avoid inviting further interaction on the issue. That is, the further you stray from "just ignore", the worse you're making it for your team. In particular, addressing the ChatGPT-ness invites debate/subterfuge about whether it was really ChatGPT or something else.

Daniel R. Collins
  • 15
    Our team policy is to not take outside technical suggestions - Is it really a good idea to say that? The problem is that they're wrong because they're from ChatGPT trying to do things that aren't possible (which is presumably why the OP had to decline the feature request in the first place). If you had good outside suggestions (like a good human-written Stack Overflow answer), a sensible person would welcome it. – Peter Cordes Jan 10 '24 at 07:57
  • 12
    Phrasing it this way would make the OP seem uncooperative for no reason, with a bad case of NIH syndrome. IMO you should tell them that ChatGPT spews garbage for impossible tasks, so the fact that ChatGPT comes up with code is basically zero evidence. And that the previous ChatGPT suggestions have all been a waste of time. If explaining that doesn't work, then yeah, "you can address it with our manager" is appropriate, but giving a wrong or misleading reason like "no outside technical suggestions" is just going to cause resentment. – Peter Cordes Jan 10 '24 at 08:01
  • 6
    In Germany, we have the concept of "Ablage 'P'", where 'P' stands for "Papierkorb" (garbage bin). It basically means: you thank them for the valuable suggestion, and as soon as the guy turns to leave, it is put into... you guessed it... – Fildor Jan 10 '24 at 08:52
  • 1
    "If you think this needs escalating, you can address it with our manager" gives the "grumpy nerd" - vibe, don't you think? – Fildor Jan 10 '24 at 09:30
  • 6
    @Fildor In the UK we call it the circular file, or the "B1N". – Sod Almighty Jan 10 '24 at 16:24
12

Demonstrate it in their own domain

If all else fails, how hard would it be to take one of the artists' tools that is scriptable and generate scripts for it with ChatGPT? E.g., Blender is scriptable, if they use Blender; my guess is many of their tools will be. Then, as a teaching moment (maybe as part of a seminar), you could generate some scripts to create art assets or do something else with the art tool, and demonstrate how, while the AI gets close and can be helpful for simple tasks, for complex tasks the amount of human intervention needed is more trouble than it's worth.

This might take a good bit of work on your part: finding out which tools are suitably scriptable, coming up with use cases of comparable complexity to the code the artists are giving you, and verifying that the AI-generated scripts are wrong in a way the artists will understand. But it could make the case pretty strongly. Obviously don't do this on the fly: do your homework, verify beforehand that the AI really does a consistently and convincingly bad job, and run canned examples in the actual seminar.

The artists may already be using AI but I’m guessing they’re not using it to generate scripts to fully-automate their tools since that’s effectively code and ChatGPT still has a long long way to go to be a competent coder. So if you can show them it’s bad at generating code in their domain, it may click.

bob
  • 12
    Asking ChatGPT how to do something that's impossible or explain something that's false seems to be a very good way to get it to generate nonsense, like inventing premises for its arguments. Explaining that to the artists might help them understand why their ChatGPT results aren't actually useful, even though it claims to have answered the question. Coming up with a plausible but actually impossible task in the artist's problem domain or scripting language would be a great way to drive home this point, assuming ChatGPT does what I expect and makes crap. – Peter Cordes Jan 10 '24 at 07:52
  • 4
    I had a similar issue, but in my own expert field. Suggestions from ChatGPT were less than useful. Thanks, ChatGPT, for your suggestion that to get the data I need I just have to call a non-existent function that never existed in that or any other library and cast it to a non-existent data type. In fact, I'd be ecstatic if it existed or if it was that easy. Alas, it doesn't and it's not. – jo1storm Jan 10 '24 at 13:02
  • 5
    I don't have a ChatGPT account so I can't verify these suggestions, but I'm guessing you could get solidly plausible sounding but actually nonsense answers for "how do you unbake clay" or "how do I get acrylic to stay wet forever like oil paint". That might be more universal for artists to understand than specific scripting things which sometimes people treat as magic anyway. – user3067860 Jan 10 '24 at 16:10
  • 7
    Or you can find similar things that don't require expert knowledge in any field, I bet asking ChatGPT to plan the itinerary for a trip to Narnia is fun. Or ask an AI meal planner to suggest recipes using mosquito repellent: https://amp.theguardian.com/world/2023/aug/10/pak-n-save-savey-meal-bot-ai-app-malfunction-recipes – user3067860 Jan 10 '24 at 16:22
  • @user3067860 "Unfortunately, it's not possible to unbake polymer clay once it has been cured in an oven. " and "Acrylic paint is water-based and dries relatively quickly compared to oil paints, which are oil-based and have a slower drying time. While it's not possible to make acrylic paint stay wet forever like oil paint..." – oldtechaa Jan 11 '24 at 01:39
  • Unfortunately it's actually accurate on that one, not so much on software development. – oldtechaa Jan 11 '24 at 01:39
  • 3
    If only art was as complex as coding... sigh – Happy Idiot Jan 11 '24 at 02:43
  • 2
    Yeah. I think the key is showing how poorly it does at something that's essentially code: something that requires logic and problem solving to generate precise step-by-step instructions / an algorithm. It really struggles at that for all but toy problems, I suspect because pattern matching != logic. – bob Jan 11 '24 at 02:54
  • 1
    Immediately after ChatGPT provides a dubious answer, the next thing you do is tell it you doubt the accuracy of that answer. Odds are it will cheerfully provide a new, different, contradictory answer. You can then repeat this process several times, getting various iterations of bullshit (euphemistically called 'hallucinations' by AI apologists.) – barbecue Jan 11 '24 at 16:05
  • 2
    @HappyIdiot Meanwhile the artists wish art was as simple as (they think) coding is (after all, you can just ask ChatGPT to write the code for you!). Almost everything looks easy from the outside. Don't dismiss art talent any more than you'd like your coding talent to be dismissed. – user253751 Jan 12 '24 at 00:31
  • @user253751 I'm pretty good at painting and drawing. I thought I was being the opposite of dismissive? – Happy Idiot Jan 12 '24 at 00:50
10

How can I, without being condescending to either my coworkers or their use of ChatGPT, ask them to not make suggestions to me that they personally don't understand and are based solely on ChatGPT?

Don't ask your coworkers.

There is a direct conflict in what you want from your coworkers (to not make suggestions that they don't fully understand) and what you have been tasked to do (to take suggestions from artists without any technical expertise). You need to speak to your boss and clearly explain this conflict and follow their guidance on how to proceed.

Peter Mortensen
sf02
  • 5
    Maybe the coworkers just misunderstand how they are supposed to make suggestions (or the nature thereof)? I'd imagine they are supposed to make artistic suggestions (like "this color should be much darker...") but not technical ones? "What" vs "How" ? – Fildor Jan 10 '24 at 08:45
  • 1
    Maybe the coworkers could be replaced with ChatGPT? – Happy Idiot Jan 11 '24 at 02:46
4

Document three or four simple real examples: what the artist wanted, the limitations you brought up, what ChatGPT suggested, and why it doesn't work, in as friendly layman's terms as possible. Put it all at a single link. Hopefully you already have the explanations written up from trying to explain the first ones directly to the artists, before you realized they'd just keep coming. But improve them if possible.

Then forever reply with "ChatGPT doesn't really work in our domain; here are four examples. Breaking these down is time-consuming and I won't do it again; it just misunderstands everything all over."

3

There is an answer (with quite a few up votes) that says

If you need this to stop, you need to have proof that the suggestions given to you are of low quality or outright wrong / infeasible.

This is rather wrong. Your boss expects you to use your judgement about which technologies to use, which to investigate, and how much time to spend investigating different technologies. Go to your boss, present your view on the likely effectiveness of ChatGPT suggestions, and ask your boss how much time they want you to spend understanding them and explaining their problems to other employees. If your boss wants you to waste your time on this, tell them you will, and tell them you expect nothing useful to come from it.

JonathanZ
3

It sounds as if the problem here is that it takes you time to write a detailed response after you've analysed the suggestion. You'll still have to do that analysis, but producing a plausible explanation for why it won't work sounds like a job that GPT would actually be good at.

Ask ChatGPT for a non-technical explanation of why the proposal won't work.

This way you're not discouraging people making suggestions, you're still considering the actual suggestion, and you've satisfied them that the technical solution proposed by GPT isn't valid, without wasting much time.

You could even add something like "I asked ChatGPT to explain why this won't work" to protect yourself from the inevitable errors. You know it means "this is probably wrong" but they'll probably read it as "this authority agrees with me".

Robin Bennett
  • Right, ChatGPT might tell you it won't work when it actually would - Aargh! – Happy Idiot Jan 12 '24 at 00:37
  • 1
    @HappyIdiot - I'm not suggesting using ChatGPT to do the analysis, just to generate a plausible explanation. Use the human expert where they're needed and use a computer for the bit it can do quickly. – Robin Bennett Jan 12 '24 at 15:50
  • Right. Autopilot can fly and even land a big plane, but it can't decide where to go! :-) – Happy Idiot Jan 12 '24 at 16:08
2

You should talk to your boss about the ChatGPT fans.

There is a high chance your amazed-by-ChatGPT colleagues will use it to do part of their work. This implies possible business-critical data leaks, as well as potential copyright infringement.

keshlam
user882813
  • 4
    Welcome, but this comment doesn't actually make sense. Could you make it clearer please? – oldtechaa Jan 11 '24 at 01:42
  • 1
    @oldtechaa: ever heard of irony? :-) – Dominique Jan 11 '24 at 08:59
  • 1
    @Dominique my criticism was not in regards to the substance of the answer. It was in regards to the poor grammar that made it unclear what the original intended meaning actually was. – oldtechaa Jan 11 '24 at 09:35
0

Possible approach: Redirect.

"Hey, if you want to use the toy to come up with artistic ideas and render them as examples of what you would ideally like to see in the game, rather than drawing illustrations yourself, that's your call, and I trust you to throw out the proposals that are artistically unreasonable; that's part of your job. But deciding what can be practically implemented or not is my job, and you need to trust my judgement, not try to appeal it to a machine that basically tells you its dreams. I will take your artistic input seriously and try to suggest ways we can get something similar to what you want without breaking the budget, as has always been the practice in this is industry. But many ideas just aren't practical with this technology or in this release. Back off and let me do my job, even if it is sometimes to tell you we can't (or can't afford to) do that."

That could benefit from some wordsmithing, but it's the key point you need them to understand. They know the machine makes stuff up in their domain. It does the same in yours. The difference is that your output isn't finished when there's a guess at how it might work, IF it happened to work that way.

keshlam
0

I think you should write a standard reply for this kind of issue. It should be understanding, not dismissive of ChatGPT as a tool, and it shouldn't make them look like idiots for using ChatGPT. If you don't come across as being against AI or condescending, they're far more willing to listen and not take it personally. Your question is already very understanding and reflective, so you seem to be equipped with the necessary social skills to do so.

You should then inform them about the current limitations of AI, especially overconfidence. It would also be good if you could include in your email an example of ChatGPT generating something obviously ludicrous, something they can understand. Then just send them this reply every time they send you a stupid suggestion obviously generated by ChatGPT.

seg
-5

Get a ChatGPT subscription and, with the permission of the company, feed the codebase documentation into it. Create a company-specific version of ChatGPT that might generate better answers.

Get the artists to use that version of ChatGPT to form their suggestions.
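The "company-specific version" above boils down to retrieval: before a question reaches the model, prepend the most relevant chunks of the engine's documentation to the prompt. Here is a minimal sketch of that retrieval step in pure Python, using plain keyword-overlap scoring (a real setup would use embeddings); the document names and their contents are made-up placeholders, not a real API or codebase:

```python
# Sketch of the retrieval step behind a "company-specific ChatGPT":
# rank internal doc chunks by relevance to the question, then prepend
# the best ones to the prompt sent to the model.

def score(question: str, chunk: str) -> int:
    """Count how many words from the question also appear in the chunk."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words)

def build_prompt(question: str, docs: dict[str, str], top_k: int = 2) -> str:
    """Select the top_k most relevant doc chunks and prepend them."""
    ranked = sorted(docs.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in ranked[:top_k])
    return f"Use only this engine documentation:\n\n{context}\n\nQuestion: {question}"

# Hypothetical in-house engine docs:
docs = {
    "shaders.md": "Custom shaders must be written in our HLSL subset; no geometry shaders.",
    "assets.md": "Art assets are imported via the asset pipeline; maximum texture size is 4096.",
    "audio.md": "Audio uses the in-house mixer; third-party middleware is not supported.",
}

prompt = build_prompt("What is the maximum texture size for art assets?", docs)
```

Even with grounding like this, the model can still hallucinate beyond the supplied context, so the generated suggestions would still need the same expert review as before.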

Peter K.
  • 1
    I don't see why this is downvoted. It seems like a direct response - I was going to suggest creating a 'sandbox' implementation where new ideas could easily be tried. Lower the barrier, just don't get hit by the falling bricks. – Happy Idiot Jan 11 '24 at 02:37
  • 3
    @HappyIdiot The problem is with GenAI itself. Even if you train it with your own, highly specialized data, it will still suffer from the exact same problems, like hallucination and being overly confident about its own crap. A tuned version of ChatGPT can only alleviate, but will not solve, the ultimate problem, which is wasting disproportionately more human effort to disprove its nonsense than it takes to generate it. – iBug Jan 11 '24 at 10:24
  • @iBug ok, I was just suggesting to make new ideas easily tryable, hopefully by the suggesters themselves? Maybe it's too much effort to be useful in this case. – Happy Idiot Jan 11 '24 at 12:15
  • I think the trick there is how would you enable the suggesters to recognize when the code produced by ChatGPT wouldn’t work (incorrect, doesn’t fit into the codebase, etc.) – bob Jan 11 '24 at 16:22
  • @bob we keep hearing about how "everyone should learn to code". We just need some rules around what actually gets used. Heck, I learned how to paint and draw. – Happy Idiot Jan 12 '24 at 00:46
  • 1
    HappyIdiot I actually disagree that everyone should learn to code because it moves them out of “coding is hard” right into the Dunning-Kruger Valley of Self-Deception where they’re more likely to say both “coding is easy” and “I’m good at coding” when for them neither is true. Plus we now have tons of blogs (and even books) giving terrible advice about coding because “everyone should code” and Dunning Kruger. It makes it harder for professional coders to find correct info. I wish people would leave coding to those who like it for what it is enough to pursue it as hobby or career. – bob Jan 12 '24 at 05:49
  • I realize I sound like a cranky old man, but the law of unintended consequences is real here. – bob Jan 12 '24 at 05:50
  • 1
    @bob moreover, people with initial coding knowledge are often vastly underqualified, yet overconfident, when it comes to discussing non-trivial code bases. "It's just a method you can call twice, right?" when it comes to a distributed system with 40+ microservices is not really an accurate question to ask at that point, when it comes to avoiding duplication of some business logic. In some cases that would require creating a new service to deal with just that one thing. Alternatively, an internal library. As well as refactoring. But they know what a loop is, how much harder "real coding" is? – VLAZ Jan 12 '24 at 07:33