Any official position on the use of AI?

Have you had any chance to discuss an official stance on AI in the deltachat/chatmail projects? Its introduction isn’t neutral and has many potential consequences, which should probably be made explicit.

Many resources point to the issues inherent in AI of many kinds. One such resource focuses mostly on the plagiarism aspects that any open source project will necessarily be subject to:

On a larger scale, AI has environmental and social impacts that cannot be swept under the rug, such as the exploitation of people in third-world countries, largely for the benefit of people in first-world countries.

As someone who is directly linked to this structural exploitation on the receiving end, it is important to me to keep in mind that this is not just an ethical concern that can simply be pushed aside: AI doesn’t work without exploitation, pure and simple. It is at the center of LLMs; it is the explicit goal of all the entities building AI. Most of our lives are already based on exploitation, which means that acting on it starts with the easiest step: don’t add yet another layer of exploitation on top of the pile.

I am obviously biased in this discussion, and I would love it if the Deltachat project would firmly reject any use of AI whatsoever. Failing that, I would really love it if there were, at least, a written document specifying the stance of the Deltachat project on the matter. In this fast-moving ecosystem, it is important to know where we should put our focus, and what we should leave behind.

As someone who has been using Deltachat since last year and has overall positive impressions on the software and how it works, I am very concerned that Deltachat may be okay with GenAI contributions.

There are many reasons why GenAI technology is horrible; have a look at https://ai-sucks-actually.fyi or small-hack/open-slopware on Codeberg.org (a list of Free/Open Source Software tainted by LLM development, along with alternatives; a fork of the repo by @gen-ai-transparency after its deletion) for more information. I’m going to focus on the reasons I think are most important, but the other reasons against GenAI are equally valid.

This is technology that sucks up the creative output of humanity and then soullessly replicates it, in essence stripping away the humanity from everything. It’s completely against my ideals of creating a more human world, and, honestly, offensive that it even exists.

It is also important to stand in solidarity with other affected professions, such as writers, artists, journalists, teachers and many others. Pretty much everyone, except some privileged individuals, is negatively affected by this technology now.

GenAI is also pushed by tech companies and billionaires to try and devalue all labor, which will have the effect of making the gap between poor and rich even bigger.

I feel the only ethically defensible position is to have a full ban on all GenAI contributions, not only with code, but also everything else.

A big question that often pops up in discussions about banning GenAI content is enforcement. I feel it is unproductive to go on witch hunts and overly scrutinize contributions for GenAI output; it is best to assume innocence until proven otherwise. GenAI contributions might slip in, and once we become aware of that, we make a best effort to revert them, if possible. Best to be pragmatic here.

The most important thing is to have the Deltachat project join us all in standing against this abusive technology.

I don’t think the DeltaChat team is strictly against AI usage, but that doesn’t mean Delta Chat is, or will become, AI slop.

Hey, GenAI is just a tool, and you can benefit from it if you use it correctly.

No, AI isn’t “just a tool”. Nothing is ever “just a tool”. It is precisely this lack of understanding that has led us to where we are today. WhatsApp could also be “just a tool”. Meta is just a company that builds “just tools”. There are good reasons why this framing is wrong, and we must apply the same reasoning to AI.

It’s open source code. You can inspect every single commit.

People overlook how tedious front-end coding is. 90% of it is scaffolding. You’d be silly to reject AI for that use case. The trick is to use it in really small ways, like the scaffolding stuff. If it starts making entire components wholesale, the project “gets away” from you and you lose the mental model of the thing in your head (which is really bad in the long run).

OpenAI will be irrelevant in 10 years. Models are getting smaller and more efficient. The future is local compute.

No doubt that labor is changing though. But you can’t put the toothpaste back into the tube.

The scaffolding could be done via traditional code generation, without any GenAI involvement, or by using programming languages that don’t need as much scaffolding. Even if that weren’t true, though, we shouldn’t be accepting something as deeply unethical as GenAI. It’s up there with proprietary software, I would say. One could argue about some benefits of proprietary software, but that doesn’t mean proprietary software should be accepted. It should not.

Local models fix the issue of billionaires being completely in control to some extent, but the tech itself is still very unethical.

I don’t like the defeatist rhetoric of “you can’t put the toothpaste back in the tube”. If something is unethical, it should be fought against. You can’t put the anti-GenAI movement back into the tube either.