Current rules on this forum are that we are not allowed to post material created by
artificial intelligence. Rule B10 of the community ethos and forum rules
explicitly states that posts created by "chatbots" will be deleted.
So far I have supported this, and have probably even flagged material that does not belong here.
However, Large Language Models (LLMs) such as ChatGPT or Gemini have recently become increasingly common tools with many applications. While we can continue to disallow artificial intelligence on this forum,
these tools are now widespread and are here to stay.
For illustration, teaching at Universities and schools is currently undergoing a drastic change.
They have realised that any homework submission has likely had at least some input from an LLM,
and thus they need to adapt their methods of assessment.
Recall how Google and Wikipedia made access to information much easier about 25 years ago.
At first we cautioned that the results could be incorrect, and they often were. But with time the results improved
and people started using Google routinely. With proper referencing of any information used, this works well.
Over the last two years, LLMs have arrived at a similar stage and they have become useful.
I have a couple of use cases in mind.
1) Many people now use LLMs at their workplace to generate summaries of papers and news articles.
We should consider doing the same, as an increasing number of forum members will be using ChatGPT or equivalent tools.
If the moderators permit, I would like to show an example of a lay summary of a paper that regularly gets mentioned in the forum.
I find the result useful. In addition, I hope that such lay summaries would be beneficial for less scientifically minded people.
2) I have started to examine some of the diet choices I make using ChatGPT and Gemini.
I find the answers informative.
If I make it clear upfront that I will be posting information generated by an LLM, I don't really see a problem anymore.
I would still be hesitant to answer other members' questions using LLMs.
What we cannot and should never do is use ChatGPT to give advice to others, but that is already against the rules.
What do the moderators (@KennyA, @Melgar, @EllieM, ...) think? I am happy to discuss via PM or video meeting if you deem it necessary.