Have you had a chance to discuss an official stance on AI in the deltachat/chatmail projects? Introducing AI is not a neutral choice and carries many potential consequences, which should probably be made explicit.
Many resources point to the issues inherent to AIs of many kinds. One such resource lists mostly the plagiarism aspects that any open source project would necessarily be subject to:
On a larger scale, AI has environmental and social impacts that cannot be swept under the rug, such as the exploitation of people in third-world countries, mostly for the benefit of people in first-world countries.
As someone directly affected by this structural exploitation, on the receiving end, it is important to me to keep in mind that this is not just an ethical concern that can simply be pushed aside: AI does not work without exploitation, pure and simple. Exploitation is at the center of LLMs; it is the explicit goal of all the entities building AI. Most of our lives are already based on exploitation, which means that acting on it starts with the easiest step: don't add yet another layer of exploitation on top of the pile.
I am obviously biased in this discussion, and while I would love for the Deltachat project to firmly reject any use of AI whatsoever, I would at the very least love to see a written document specifying the project's stance on the matter. In this fast-moving ecosystem, it is important to know where we should put our focus, and what we should leave behind.