Have you had any chance to discuss an official stance on AI in the deltachat/chatmail projects? Its introduction isn’t neutral and carries many potential consequences, which should probably be made explicit.
Many resources point to the issues inherent in AIs of many kinds. One such resource focuses mostly on the plagiarism aspects that any open-source project will necessarily be subject to:
On a larger scale, AI has environmental and social impacts that cannot be swept under the rug, such as the exploitation of people in third-world countries, mostly for the benefit of people in first-world countries.
As someone who is directly linked with this structural exploitation on the receiving end, it is important to me to keep in mind that this is not just an ethical concern that can simply be pushed aside: AI doesn’t work without exploitation, pure and simple. It is at the center of LLMs; it is the explicit goal of all the entities building AI. Most of our lives are already based on exploitation, which means that acting on it starts with the easiest step: don’t add yet another exploitation on top of the pile.
I am obviously biased in this discussion, and while I would love for the Deltachat project to firmly reject any use of AI whatsoever, I would really love if there were, at least, a written document specifying the stance of the Deltachat project on the matter. In this fast-moving ecosystem, it is important to know where we should put our focus, and what we should leave behind.
As someone who has been using Deltachat since last year and has an overall positive impression of the software and how it works, I am very concerned that Deltachat may be okay with GenAI contributions.
This is technology that sucks up the creative output of humanity and then soullessly replicates it, in essence stripping away the humanity from everything. It’s completely against my ideals of creating a more human world, and, honestly, offensive that it even exists.
It is also important to stand in solidarity with other affected professions, such as writers, artists, journalists, teachers and many others. Pretty much everyone, except some privileged individuals, is negatively affected by this technology now.
GenAI is also pushed by tech companies and billionaires to try to devalue all labor, which will widen the gap between poor and rich even further.
I feel the only ethically defensible position is a full ban on all GenAI contributions, not only for code but for everything else.
A big question that often pops up in discussions about banning GenAI content is enforcement. I feel it is unproductive to go on witch hunts and overly scrutinize contributions for GenAI output; it is best to assume innocence until proven otherwise. GenAI contributions might slip in, and once we become aware of that, we make a best effort to revert them, if possible. Best to be pragmatic here.
The most important thing is to have the Deltachat project join us all in standing against this abusive technology.
No, AI isn’t “just a tool”. Nothing is ever “just a tool”. It is precisely this lack of understanding that has led us to where we are today. Whatsapp could also be “just a tool”. Meta is just a company that builds “just tools”. There are good reasons why this is wrong, and we must apply the same reasoning to AI.
It’s open source code. You can inspect every single commit.
People overlook how tedious front-end coding is. 90% of it is scaffolding. You’d be silly to reject AI for that use case. The trick is to use it in really small ways. Like the scaffolding stuff. If it starts making entire components wholesale, the project really just “gets away” from you and you lose control of the mental model of the thing in your head (which is really bad in the long run).
OpenAI will be irrelevant in 10 years. Models are getting smaller and more efficient. The future is local compute.
No doubt that labor is changing though. But you can’t put the toothpaste back into the tube.
The scaffolding could be done via traditional code generation, without any GenAI involvement, or using programming languages that don’t need as much scaffolding. Even if that weren’t true, though, we shouldn’t be accepting something as deeply unethical as GenAI. It’s up there with proprietary software, I would say. One could argue about some benefits of proprietary software, but that doesn’t mean proprietary software should be accepted. It should not.
Local models fix, to some extent, the issue of billionaires being completely in control, but the tech itself is still very unethical.
I don’t like the defeatist rhetoric of “you can’t put the toothpaste back in the tube”. If something is unethical, it should be fought against. You can’t put the anti-GenAI movement back into the tube either.
Yes, I agree. My perspective is that AI has material effects on labor, power, and our very finite resources, so I would like the project to have an explicit statement on that as well.
I don’t want to step in here too deeply, and I mostly agree with @lumi, but I also think that for skilled (!) and aware (!) people, “AI” could indeed be some sort of helpful tool (best with a local model, to also see its impact on the power bill). For the people I mentioned, an “AI ban” could be like telling them to stop using a keyboard for typing and use a touchscreen instead. Not sure that’s how it works, or whether they would do that if they (you know, the skilled and aware ones) see a real (!) benefit from “AI” usage.
Let me try another example: globalisation.
I think globalisation brought many advantages to civilisation (we can buy bananas all year long and have low duties between countries), but also many disadvantages (everybody is experiencing this right now due to high fuel/energy prices).
So is globalisation “bad” or is it “good”? I think there’s no “right” answer to this. That’s why I think there’s also no “right” answer to “AI” being “good” or “bad”.
It’s always the people who use the technology to either harm others or help develop global welfare. In my opinion, “AI bros” definitely harm people just for profit. Local “AI” models, on the other hand, can (and do) improve our lives. “AI” was already there before the hype started; back then it was mostly called an algorithm.
This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
The extensive testing against the old codebase shows that the developers were using “AI” as a tool while knowing about its problems and caveats, focusing on a good result instead of a quick-and-dirty one.
As the “AI” question is highly philosophical, my five cents for the Delta Chat project would be to build awareness among developers and maybe have guidelines restricting the usage of such tools to the tasks they are really good at and where they can actually help (e.g. scaffolding or similar). Transparency also helps with this (like the Ladybird dev did with his post).
It is actually very easy to say that globalisation is bad; unfortunately, you don’t seem to understand what globalisation is. It’s not about having bananas available everywhere: bananas have been traded for multiple millennia already. That is just the exchange of goods, monetised or not. Globalisation is something very specific and very different: overriding national rules for private interest. People in Europe ask for too much money, don’t want to work for me, go on holiday? Then I’ll exploit people from the Global South instead, where working conditions are worse, wages are ten times lower, and I can force people to work for me by taking their passports and denying them free movement. Globalisation is when TotalEnergies, the third-biggest oil enterprise, makes dozens of billions of dollars in profit and doesn’t pay taxes on much of it because it can play the different laws against each other. Everything is legal, just as hanging people because of their nationality is legal in many places, but legality is not the right question: does it make society better? There is no reason to believe so.
It’s very good that you bring up globalisation, because AI has similar aspects. AI doesn’t work without breaking laws. Corpora have been built on material illegally acquired, without the consent of most of the people involved. It is trained with no regard for people’s protection, fairness, justice, or equity; just private interest. It exploits entire continents and keeps people under constant stress and in miserable conditions, but these are places with poorer working conditions: white people wouldn’t stand a week working in those conditions, but because they’re black and brown, white people don’t care. The objective is clearly domination, not just of poor countries but of everyone who isn’t at the helm of an AI company, for the benefit of those who are. AI builders want a world where laws no longer affect them and are doing everything to get there.
How are local models built? Who builds them? Who funds them? Those questions are always evaded as if they were solved, but nothing has changed.
The core question is something you touched on: is the comfort of white people more important than the lives of non-white people, the preservation of a livable planet, the lowering of injustice? A better commit for you isn’t worth a lifetime of trauma for someone else (read the links, especially the second one in the first post).
DeltaChat wants to get people away from BigTech and centralization, and give power back to people; AI is the opposite of that. Unfortunately, as with all things, using or talking positively about AI only legitimizes it as an acceptable part of life. Hence the importance of being clear about it.
Globalization seems to have some good sides too and isn’t just all imperialism and exploitation (not that I would be an expert anyway), but I get your point: it’s not as obviously more good than bad as people make it out to be, just like AI.
There is no official position on LLM tooling or LLM-assisted contributions, but there have been several discussions among various contributors, across a wide spectrum of opinions. What is pretty clear already is that, unlike WhatsApp or Telegram or Firefox, Delta Chat keeps it like Signal: avoid any “AI” integration on the user side, in what runs in the apps. What is also pretty clear is that random LLM-generated contributions are quickly closed. While the question of LLM usage might be the most important one in FOSS hacker circles and among rightfully concerned ecosystem citizens, our project efforts are focused on supporting users and communities in real-life emergencies, who are otherwise blocked from private communication. Moreover, many contributors are currently busy coordinating important bug-fix follow-up releases after the 2.48 monster release.
That being said, there will likely be public statements from the “deltachat” project during 2026 about LLM usage, once we get to discuss it more internally. This is maybe similar to how Debian concluded for now with “not deciding right now about LLMs”: Debian decides not to decide on AI-generated contributions [LWN.net]
It’s good to know that DeltaChat will not have any GenAI features, but I feel this is not enough. Using GenAI in the creation of DeltaChat or the wider ChatMail ecosystem still tacitly endorses it.
This issue concerns way more people than just FOSS hacker circles; GenAI technology is fundamentally unethical in all its uses, including coding. I know many non-programmers who are against GenAI.
I would argue that the goal of supporting those users aligns with banning GenAI, as GenAI technology disproportionately affects vulnerable people. So banning it would directly support those goals.
It is fair to decide not to decide; still, could there at least be a ban until a decision has been reached? (This is the status quo anyway.)
In my view, the DeltaChat community has many different perspectives on the ethical aspects of AI, and this reflects the fact that people in general, and the open-source community in particular, have many different perspectives on this. For that reason, I don’t think DeltaChat should pursue an AI ban on ethical grounds.
The key thing I think DeltaChat should do is adopt a policy that all use of LLMs must be disclosed in the commit message, with details of the model used and the harness used, if any. I also think DeltaChat should forbid the use of code written by agents without human review, and forbid agents from opening pull requests themselves.
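To make the disclosure idea concrete, here is a purely hypothetical sketch of what it could look like as commit-message trailers. The trailer names and placeholder values below are my own invention, not an existing Delta Chat or git convention:

```
Fix flaky reconnect handling in the scheduler

(normal commit body describing the change)

Assisted-by: LLM
LLM-Model: <model name and version>
LLM-Harness: <editor plugin or agent framework, if any>
Human-Reviewed: yes
```

Trailers in this style have the nice property that they can be parsed mechanically (e.g. with `git interpret-trailers` or a small CI script), so checking for the disclosure stays lightweight and doesn’t require guessing about a commit’s origins.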
This is not an argument. People using something isn’t a reason in itself to continue accepting it. That’s like saying “many people are using slaves, and that is a reason why there shouldn’t be a ban”. The direction is something that is decided, not simply accepting whatever exists.
I’m not using the slavery metaphor randomly. Both AI and slavery are rooted in the same mindset: domination for my personal comfort. It is deeply ingrained in Western white culture and is as such very hard to understand, let alone fight, but it is there: the very fact that ethical concerns might be pushed aside means that the lives of everyone on this planet aren’t worth the same to you. And I, like many others, believe that nothing truly fair, truly just, and beautiful can grow on top of that. Hence why only a total ban of AI, just like a total ban of slavery, is the minimum.
Thanks for the reply. I understand how this topic might create a lot of heated discussion, especially with so little time and other priorities that feel more pressing; that’s why I think progress here starts with at least knowing the current position, so that everyone knows where we stand and what the possibilities for evolution are.
I welcome your statement; I think it’s the best the community can have at this time, so to me the initial purpose of this topic is achieved. I guess we can keep arguing in the rest of this topic, or in other topics, until a more detailed position is reached.
Unfortunately AI haters won’t be able to use any operating system anymore. Maybe Haiku? At least we will soon have peptides that will turn us into giga geniuses so we won’t need computers anymore /s
Keep fighting against it whenever we can, but accept that reality is currently not very good for us. Compromise a bit on the practical side, but never compromise on our ideals. This technology is extremely unethical and we must be loudly against it.
We must also band together with everyone affected, no matter what their profession is.
There are many parallels between LLM-generated code and proprietary software, and I feel LLM outputs should be seen as almost as bad as proprietary code. The way forward is to expand the movement and grow the number of slop[0]-free software projects, and to make connections with writers, artists and other affected professions, so we can all get together and purge it from the world.
This is an interesting discussion. Isn’t generative AI more similar to Free / Libre tech? At the least, it is the software pirate’s dream. Robin Hood technology, in a way. If everyone gets their own LLM run locally, there won’t be much need for proprietary apps at all. We are already able to screenshot an app from the app store and tell an agent to copy the functionality as a skill.