Commons:Village pump/Proposals
This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2025/02.
- One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
- Have you read the FAQ?
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.
RfC: Changes to the public domain license options in the Upload Wizard menu
An editor has requested comment from other editors for this discussion. If you have an opinion regarding this issue, feel free to comment below.
Should any default options be added or removed from the menu in the Upload Wizard's step in which a user is asked to choose which license option applies to a work not under copyright? Sdkb talk 20:19, 19 December 2024 (UTC)
Background
The WMF has been (at least ostensibly) collaborating with us during its Upload Wizard improvements project. As part of this work, we have the opportunity to reexamine the step that occurs after a user uploads a work that they declare is someone else's work but not protected by copyright law. They are then presented with several default options corresponding to public domain license tags, or a field to write in a custom tag:
It is unclear why these are the specific options presented; I do not know of the original discussion in which they were chosen. This RfC seeks to determine whether we should add or remove any of these options. I have added one proposal, but feel free to create subsections for others (using the format "Add license name" or "Remove license name" and specifying the proposed menu text). Sdkb talk 20:19, 19 December 2024 (UTC)
Add PD-textlogo
Should {{PD-textlogo}} be added, using the menu text "Logo image consisting only of simple geometric shapes or text"? Sdkb talk 20:19, 19 December 2024 (UTC)
Support. Many organizations on Wikipedia that have simple logos do not have them uploaded to Commons and used in the article. Currently, the only way to upload such images is to choose the "enter a different license in wikitext format" option and enter "{{PD-textlogo}}" manually. Very few beginner (or even intermediate) editors will be able to navigate this process successfully, and even for experienced editors it is cumbersome. PD-textlogo is one of the most common license tags used on Commons uploads — there are more than 200,000 files that use it. As such, it ought to appear in the list. This would make it easier to upload simple logo images, benefiting Commons and the projects that use it. Sdkb talk 20:19, 19 December 2024 (UTC)
- Addressing two potential concerns. First, Sannita wrote, "the team is worried about making available too many options and confusing uploaders". I agree with the overall principle that we should not add so many options that users are overwhelmed, but I don't think we're at that point yet. Also, if we're concerned about only presenting the minimum number of relevant options, we could use metadata to help customize which ones are presented to a user for a given file (e.g. a .svg file is much more likely to be a logo than a .jpg file with metadata indicating it is a recently taken photograph).
- Second, there is always the risk that users upload more complex logos above the TOO. We can link to commons:TOO to provide help/explanation, and if we find that too many users are doing this for moderators to handle, we could introduce a confirmation dialogue or other further safeguards. But we should not use the difficulty of the process to try to curb undesirable uploads any more than we should block newcomers from editing just because of the risk they'll vandalize — our filters need to be targeted enough that they don't block legitimate uploads just as much as bad ones. Sdkb talk 20:19, 19 December 2024 (UTC)
- "we could use metadata" I'd be very careful with that. The way people use media changes all the time, so making decisions about how the software behaves on something like that, I don't know... Like, if it is extracting metadata, or check on is this audio, video, or image, that's one thing, but to say 'jpg is likely not a logo and svg and png might be logos' and then steer the user into a direction based on something so likely to not be true. —TheDJ (talk • contribs) 10:52, 6 January 2025 (UTC)
Oppose. Determining whether a logo is sufficiently simple for PD-textlogo is nontrivial, and the license is already frequently misapplied. Making it available as a first-class option would likely make that much worse. Omphalographer (talk) 02:57, 20 December 2024 (UTC)
Comment only if this will result in it being uploaded but tagged for review. - Jmabel ! talk 07:14, 20 December 2024 (UTC)
- That should definitely be possible to implement. Sdkb talk 15:13, 20 December 2024 (UTC)
Support Assuming there's some kind of review involved. Otherwise Oppose, but I don't see why it wouldn't be possible to implement a review tag or something. --Adamant1 (talk) 19:10, 20 December 2024 (UTC)
Support for experienced users only. Sjoerd de Bruin (talk) 20:20, 22 December 2024 (UTC)
Oppose per Omphalographer. {{PD-textlogo}} can be used only with a logo that is sufficiently simple in the majority of countries per COM:Copyright rules (the first sentence in the USA, and both countries per COM:TOO), in my opinion (Google Translate). AbchyZa22 (talk) 11:02, 25 December 2024 (UTC)
Oppose in any case. We have enough backlogs and don't need another thing to review. --Krd 09:57, 3 January 2025 (UTC)
- How about we just disable uploads entirely to eliminate the backlogs once and for all?[Sarcasm] The entire point of Commons is to create a repository of media, and that project necessarily will entail some level of work. Reflexively opposing due to that work without even attempting (at least in your posted rationale) to weigh that cost against the added value of the potential contributions is about as stark an illustration of the anti-newcomer bias at Commons as I can conceive. Sdkb talk 21:36, 3 January 2025 (UTC)
Oppose. I think the template is often misapplied, so I do not want to encourage its use. There are many odd cases. Paper textures do not matter. Shading does not matter. An image with just a few polygons can be copyrighted. Glrx (talk) 19:47, 6 January 2025 (UTC)
Support adding this to the upload wizard, basically per Sdkb (including the first two sentences of their response to Krd). Indifferent to whether there should be a review process: on one hand, it'd be another backlog that will basically grow without bound; on the other, it could be nice for the reviewed ones. —Mdaniels5757 (talk • contribs) 23:57, 6 January 2025 (UTC)
General discussion
Courtesy pinging @Sannita (WMF), the WMF community liaison for the Upload Wizard improvements project. Sdkb talk 20:19, 19 December 2024 (UTC)
- Thanks for the ping. Quick note: I will be on vacation starting tomorrow until January 1, therefore I will probably not be able to answer until 2025 starts, if needed. I'll catch up when I have a working connection again, but also be aware that new changes to code will need to wait until at least mid-January. Sannita (WMF) (talk) 22:02, 19 December 2024 (UTC)
- Can we please add a warning message for PDF uploads in general? This is currently enforced by abuse filter, and is the second most common report at Commons talk:Abuse filter. And if they use pd-textlogo or PD-simple (or any AI tag) it should add a tracking category that is searched by User:GogologoBot. All the Best -- Chuck Talk 23:21, 19 December 2024 (UTC)
- Yes, please. Even with the abuse filter in place, the vast majority of PDF uploads by new users are accidental, copyright violations, and/or out of scope. There are only a few appropriate use cases for the format, and they tend to be uploaded by a very small number of experienced users. Omphalographer (talk) 03:11, 20 December 2024 (UTC)
Comment, the current version of the MediaWiki Upload Wizard contains the words "To ensure the works you upload are copyright-free, please provide the following information.", but Creative Commons (CC) isn't "copyright-free"; it is a free copyright ©️ license, not a copyright-free license. I'm sure that Sannita is keeping an eye on this, so I didn't ping him. It should read along the lines of "To ensure the works you upload are free to use and share, please provide the following information.". --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 12:19, 24 December 2024 (UTC)
- @Donald Trung: Sannita (WMF) presents as male, and uses pronouns he/him/his. Please don't make such assumptions about pronouns. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:02, 24 December 2024 (UTC)
- My bad, I've corrected it above. For whatever reason I thought that he was a German woman because I remember seeing the profile of someone on that team and I probably confused them in my head, I just clicked on their user page and saw that it's an Italian man. Hopefully he won't feel offended by this mistake. Just saw that he's a fellow Whovian, but the rest of the comment remains unaltered as I think that the wording misrepresents "free" as "copyright-free", which are separate concepts. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 14:09, 24 December 2024 (UTC)
- (Hello, I'm back in office) Not offended at all, it happens sometimes on Italian Wikipedia too. Words and names ending in -a are usually feminine in Italian, with some exceptions like my name and my nickname that both end in -a, but are masculine. :) Sannita (WMF) (talk) 13:15, 2 January 2025 (UTC)
- Wiki markup: {{gender:Sannita (WMF)|male|female|unknown}} → male. Glrx (talk) 03:07, 3 January 2025 (UTC)
Unarchiving, as unclosed. Sdkb talk 07:50, 6 February 2025 (UTC)
Category naming for proper names
There are currently multiple CfD disputes on the naming of categories for proper names (Commons:Categories for discussion/2024/12/Category:FC Bayern Munich and Commons:Categories for discussion/2024/12/Category:Polonia Warszawa). The problem is caused by an unclear guideline. At COM:CAT the guideline says: "Category names should generally be in English. However, there are exceptions such as some proper names, biological taxa and names for which the non-English name is most commonly used in the English language". The first problem is that sometimes people do not notice that there is no comma before the "for" and think that the condition applies in all cases. This might also be caused by some wrong translations. The other problem is the "some", as there are no conditions defining when this applies and when it does not. I think we have four options:
- Translate all proper names
- Translate proper names when English version is commonly used (enwiki uses a translated name)
- Do not translate proper names but transcribe non Latin alphabets
- Always use the original proper name
Redirects can exist anyway. The question of what to do with locations that have multiple official local names in multilingual regions is a different topic, to be discussed after there is a decision on the main question. GPSLeo (talk) 11:40, 28 December 2024 (UTC)
- I don't think it's a bad thing that the rule gives room for case-by-case decisions. The discussions about this are very long, but it's rarely about a real problem with finding or organising content. So my personal rule would be something like ‘If it's understandable to an English speaker, is part of a subtree curated by other users on an ongoing basis, and you otherwise have no engagement in that subtree, don't suggest a move just because of a principle that makes no difference here.’ Rudolph Buch (talk) 14:37, 28 December 2024 (UTC)
- 100% That should be the standard. People are too weak when it comes to dealing with obviously disingenuous behavior or enforcing any kind of standards on here though. But 99% of the time this is only a problem because someone wants to use category names as their personal nationalist project. It's just that no one is willing to put their foot down by telling the person that's not what categories are for. Otherwise this would be a nonissue. But the guideline should be clear that category names shouldn't be in the "native language" if it doesn't follow the category tree and/or is only being done for personal, nationalistic reasons. --Adamant1 (talk) 18:40, 28 December 2024 (UTC)
- I think that at least in most cases the right answer is something like #2 except:
- I wouldn't always trust en-wiki to get it right, especially on topics where only one or two editors have ever been involved there, and we might well have broader and more knowledgeable involvement here.
- Non-Latin alphabets should be transliterated.
- The thing is, of course, that is exactly the one that frequently requires judgement calls, so we are back where we started.
- Aside: in my experience, some nationalities (e.g. German) have a fair number of people who will resist the use of an English translation no matter how common, while others (e.g. Romanian) will "overtranslate". On the latter, as an American who has spent some time in Romania, I'm always amazed when I see Romanians opt for English translations for things where I've always heard English-speakers use the Romanian (e.g. "Roman Square" for "Piața Romana"; to my ear, it is like calling the composer Giuseppe Verdi "Joseph Green"). - Jmabel ! talk 18:59, 28 December 2024 (UTC)
- I've made the sentence in COM:CAT, quoted by OP, into a list, to remove ambiguity. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:56, 12 January 2025 (UTC)
Oppose all four suggested solutions because each would be too disruptive. Several outcomes are possible: top-down regulations to be mass-executed by a handful of editors who could instead do more meaningful work, while a large group of editors rises up in protest against a consensus that they didn't participate in (see recently: "Historical images of..."), is one possible outcome. Another possibility is a toothless rule that is generally ignored in practice, except when it can be wielded to cudgel others.
The current rule for proper names is sufficient and remains flexible enough to be handled by contributors of all kinds. My arguments against each of the four general ideas: Solutions 1 and 2 are hardheaded English WP supremacy and should be discarded right away; Commons is a multilingual community project. Solution 3 sounds best at first, but it is again inflexible: non-Latin category names exist by the thousands, Cyrillic and East Asian publications most prominently (Category:武州豊嶋郡江戸庄図, note how the Japanese title uses Kanji), and it's not as if transliterated-to-Pinyin Chinese is easier to understand for non-writers. That means (imo) that on the lowest levels in the category tree, native names should be allowed in whatever script, as long as the generic parent categories like "Category:Books from Russia about military" that would be used to navigate the cat-tree are still in English. Regarding solution #4: "always use the original proper name" could be interpreted by some to raise language barriers against foreign editors on a much higher level: I prefer to find Chinese provinces under English category names like "Anhui" and "Guangdong", not as Category:北京. I prefer Arabic personal names transliterated (by whatever method, even), and so on. --Enyavar (talk) 22:10, 27 January 2025 (UTC)
RfC: Should Commons ban AI-generated images?
An editor has requested comment from other editors for this discussion. If you have an opinion regarding this issue, feel free to comment below.
Should Commons policy change to disallow the uploading of AI-generated images from programs such as DALL-E, Midjourney, Grok, etc., per Commons:Fair use?
Background
AI-generated images are a big thing lately and I think we need to address the elephant in the room: they have unclear copyright implications. We do know that in the US, AI-generated images are not copyrighted because they have no human author, but they are still very likely considered derivative works of existing works.
AI generators use existing images and texts in their datasets and draw from those works to generate derivatives. There is no debate about that; that is how they work. There are multiple ongoing lawsuits against AI generator companies for copyright violation. According to this Washington Post article, the main defense of AI generation rests on the question of whether these derivative works qualify as fair use. If they are fair use, they may be legal. If they are not fair use, they may be illegal copyright violations.
However, as far as Commons is concerned, either ruling would put AI images against Commons policy. Per Commons:Fair use, fair use media files are not allowed on Commons. Obviously, copyright violations are not allowed either. This means that under either of the two possible legal decisions about AI images, they could not be used on Commons. There is no possible scenario where AI-generated images are not considered derivative in some way of copyrighted works; it's just a matter of whether it's fair use or not. As such, I think that AI-generated images should be explicitly disallowed in Commons policy.
Discussion
Should Commons explicitly disallow the uploading of AI-generated images (and by proxy, should all existing files be deleted)? Please discuss below. Di (they-them) (talk) 05:00, 3 January 2025 (UTC)
- Enough. It is a great waste of time to have the same discussion over and over and over. I find it absurd to think that most AI creations are going to be considered derivative works. The AI programs may fail that test, but what they produce clearly isn't. Why don't we wait until something new has happened in the legal sphere before we start this discussion all over?--Prosfilaes (talk) 06:21, 3 January 2025 (UTC)
Oppose No, it shouldn't, and they are not derivative works; and if they are uploaded by the person who prompted them, they are also not fair use but PD (or maybe CC BY). They are not derived from millions of images, just as images you draw are not "derived" from public works you previously saw (like movies, public exhibitions, and online art) that inspired or at least influenced you.
"There is no debate about that; that is how they work." False.
"the main defense of AI generation rests on the question of whether these derivative works qualify as fair use." Also false. Prototyperspective (talk) 09:52, 3 January 2025 (UTC)
- Most AI-generated images, unless the AI is explicitly told to imitate a certain work, are not "derivative works" in the sense of copyright, because the AI does a thing similar to humans when they create new works: Humans have knowledge of a lot of pre-existing works and create new works that are inspired by them. AI, too, "learns" for example what the characteristics of Impressionist art are through the input of a lot of Impressionist paintings, and is then able to create a new image in Impressionist style, without that image being a derivative work of any specific work where copyright regulations would apply - apart from the fact, of course, that in this specific example, most of the original works from the Impressionist period are public domain by now anyway. The latter would also be an argument against the proposal: Even if it were the case that AI creates nothing but "derivative works" in the sense of copyright, derivative works of public domain original art would still be absolutely fine, so this would be no argument for completely banning AI images. Having said all that, I think that we should handle the upload of AI images restrictively, allow them only selectively, and Commons:AI-generated media could be a bit stricter. But a blanket ban wouldn't be a reasonable approach, I think. Gestumblindi (talk) 11:12, 3 January 2025 (UTC)
- We want images for a given purpose. It's a user who uploads such an image. He is responsible for his work. We shouldn't care how much assistance he had in the creation process. But I'd appreciate an agreement on banning photorealistic images designed for deceiving the viewer. AI empowers users to create images of public (prominent) people and have these people appear more heroic, evil, clean, dirty, important or whatever than they are. But we have this problem with photoshop already. I don't want such images in Wikimedia even if most people know a given image to be a hoax (such as those of Evil Bert from sesame street). Vollbracht (talk) 01:42, 4 January 2025 (UTC)
- This discussion isn't about deception or usefulness of the images, it's about them being derivative works. Di (they-them) (talk) 02:12, 4 January 2025 (UTC)
- You got the answer on "derivative works" already. I can't see a legal difference between a photoshopped image and an image altered by AI or a legal difference between a paintbrush artwork and an AI generated "artwork". Still as Germans say: "Kunst kommt von können." (Art comes from artistic abilities.) It's not worth more than the work that went into it. If you spend no more than 5 min. "manpower" in defining what the AI shall generate, you shouldn't expect to have created something worthy of any copyright protection or anything new in comparison to an underlying work of art. We don't need more rules on this. When deriving something keep the copyright in mind - no matter what tool you use. Vollbracht (talk) 03:34, 4 January 2025 (UTC)
- Look at other free-upload platforms and you get to the inevitable conclusion that AI uploads will ultimately overwhelm Commons by legal issues or sheer volume. Because people. But with no new legal impulses and no cry for action from tech Commons, I see no need for a new discussion at this point. Alexpl (talk) 05:59, 4 January 2025 (UTC)
- As I understand it, there are three aspects of an AI image:
- The creations caused by the computer algorithm. Probably not copyrighted anywhere because an algorithm is not an animal.
- An AI prompt, entered by a human. This potentially exceeds the threshold of originality, in which case the AI output probably is a derivative work of the prompt. Maybe we need a licence of the AI prompt from the person who wrote it, unless the prompt itself is provided and determined to be below the threshold of originality.
- Sometimes an AI image or text is a derivative work of an unknown work which the AI software found somewhere on the Internet. Here it might be better to assume good faith and only delete if an underlying work is found. --Stefan2 (talk) 11:54, 7 January 2025 (UTC)
- Re 2: note that short quotes can also be put onto Wikipedia (which is CC BY-SA) and Wikiquote. Moreover, that applies to the prompt, but media files can also be uploaded without the input prompt attached. In any case, if the prompt engineer licenses the image under CC BY or PD then it can be uploaded, and I only upload those kinds of AI images even if further ones may also be PD. Re 3: that depends on the prompt; if you're tailoring the prompt in some specific way, it may create an image looking very similar to an existing work. E.g. if you prompt "La Vie, 1903 painting by Pablo Picasso, in the style of Pablo Picasso, the life", it's likely to produce an image looking like the original. I also don't think it would be good to assume that active contributors would do so without disclosing it. Prototyperspective (talk) 12:16, 7 January 2025 (UTC)
- If you ask for "La Vie, 1903 painting by Pablo Picasso, in the style of Pablo Picasso, the life", then you are very likely to get a derivative work.
- If you ask for "a picture of a cat", then there is no problem with #2, but you have no way of knowing how the AI tool produced the picture, so you are maybe in violation of #3 (you'll find out if the copyright holder sues you). --Stefan2 (talk) 12:53, 7 January 2025 (UTC)
Oppose Whatever the details of AI artwork and derivatives are, there's a serious lack of people checking for copyright violations to begin with, and anyone who tries to follow any kind of standards when it comes to AI artwork just gets cry-bullied, threatened, and/or sanctioned for supposedly causing drama. So there's really no point in banning it, or even moderating it in any way whatsoever, to begin with. The more important thing is properly labeling it as such and not letting people pass AI artwork off on here as legitimate, historically accurate images. The only other alternative would be for the WMF to take a stance on it one way or another, but I don't really see that happening. There's nothing that can or will be done about all the AI slop on here until then though. --Adamant1 (talk) 06:51, 9 January 2025 (UTC)
- Conditional Support. I do not support an outright and total ban of any and all AI-generated imagery (in short: AI files) on Commons; that's going too far. But I would support a strict enforcement and a strict interpretation of our scope policy in regards to such imagery. By that, I mean the following.
- I support the concept that any upload of AI-generated imagery has to satisfy the existence and demonstration of a concise and legitimate use case on Wikimedia projects before uploading the data on Commons. If any AI file is not used, then it's blanketly out of scope. Reasoning: most Wikimedia projects have a rule of only holding verifiable information. AI files have a fundamental issue with this requirement of verifiability, as the large language models (LLMs) used do not allow for a correlation between input and output. This is exemplified by the inability of the LLM creators to remove the results of rights-infringing training data from the processing algorithms; they can only tweak the output to forbid the LLM outputting infringing material like song lyrics or journalistic texts.
- I support a complete ban of AI-generated imagery that depicts real-life celebrities or historical personages. For celebrities, the training data is most likely made of copyrighted imagery, at least partly. For historical personages, AI files will likely deceive a viewer or reader into thinking that the AI file is historically accurate. Such a result, deceiving, is against our project scope, see COM:EDUSE.
- I support the notion of using AI files to illustrate concepts that fall within the purview of e.g. the social sciences. I could very well see a good use case to illustrate e.g. poverty, homelessness, sexuality topics and other potentially contentious themes at the discretion of the writing Wikipedian. AI files may offer the advantage that most likely no personality rights will get touched by the depiction. For this use case, AI files would have to strictly satisfy our COM:Redundant policy: as soon as there is an actual human-made media file (a photograph, movie or sound recording) that fulfils the same purpose as the AI file, then the AI file is blanketly out of scope.
- I am aware that these opinions are quite strict and rather against AI-generated imagery. That's due to my background thoughts about the known limitations of generative software and a currently unclear IP rights situation around the training data and the output of these LLMs. I lack the imagination on how AI files could currently serve to improve the mission of disseminating knowledge, save for some limited use cases. Regards, Grand-Duc (talk) 19:07, 9 January 2025 (UTC) PS. For further reference: Commons:Deletion requests/Files in Category:AI-generated portraits.
- Re 1.: some people complain that people upload images without use case, other people complain when people add useful media they created themselves to articles – it's impossible to make it right. Moreover, Commons isn't just there as a hosting site for Wikipedia & Co but also a standalone site. Your point about LLM is good and I agree but this discussion is not about LLMs but AI media creation tools.
- Re 2.: paintings are also inaccurate. Images made or modified with AI (or made with AI and then edited with eg Photoshop) are not necessarily inaccurate. I'm also very concerned about the known limitations of generative software but that doesn't really support your points and doesn't support that Commons should censor images produced with a novel toolset. Prototyperspective (talk) 19:34, 9 January 2025 (UTC)
- All the AI media creation tools, be they Midjourney, Grok, DALL-E and the plethora of other offerings, are based upon LLMs. So, any discussion about current "AI media creation tools" is the same as discussing the implications of LLMs in practice, IP law and society. And yes, Commons wants to also serve other sites and usages (like school homework for my son; it did so in the past and will do in the future). But as anybody may employ generative AI, there is no need to use Commons to endorse any and all potential uses. As I tried to demonstrate, AI files are only seldom useful to disseminate knowledge, see Commons:Project scope.
- Paintings are often idealized, yes, introducing inaccuracies. But in that case, the work is vouched for by a human artist, who employed his creativity and his knowledge, based upon the learnings of his life, to produce a given result. These actions cannot be duplicated at the moment by generative AI, only imitated. And while mostly educated humans will recognize a painting as a creation of a fellow human that will certainly contain inaccuracies, the stories about "alternative facts", news bubbles, deepfakes etc. show that generative AI products are often not recognized as such and are taken at face value. Regards, Grand-Duc (talk) 19:56, 9 January 2025 (UTC)
- No, those are not the same implications. However, you got closer to understanding the concept and basics of prompt engineering, which is about getting the result you intend or imagined despite all the flaws LLMs have. People have developed all sorts of techniques and tricks to make these tools produce the images they have in mind at a quality they'd like them to have. If you think people ask AI generator tools to illustrate a subject by just providing the concept's name like "Cosmic distance ladder" and then assume it produces an accurate, good image showing that, you'd be wrong. Moreover, most AI images look like digital art rather than photos and are generally labelled as such. Prototyperspective (talk) 22:05, 9 January 2025 (UTC)
- Oppose per Prosfilaes; it is not at all guaranteed that we're facing a Hobson's choice here. Some AI images may well be bad, but banning them all just in case is ridiculous. --GRuban (talk) 21:53, 9 January 2025 (UTC)
Support with the possible exception of images that are themselves notable. Blythwood (talk) 20:22, 11 January 2025 (UTC)
- Mostly support, but not for the reasons proposed. While I don't disagree with the argument that AI-generated content could potentially be considered a derivative work, this argument isn't currently supported by law, and I don't think that's likely to change in the near future. However, very few AI-generated images have realistic educational use. Generated images of real people, places, or objects are inherently inferior to actual photos of those subjects, and often contain misleading inaccuracies; images of speculative topics tend towards clichés (like holograms and blue glowing robots for predictions of the future, or ethnic stereotypes for historical subjects); and AI-generated diagrams and abstract illustrations are inevitably meaningless gibberish. The vast majority of AI-generated images inserted into Wikimedia projects are rejected by editors on those projects and subsequently deleted from Commons; those that aren't removed tend to be more the result of indifference (especially on low activity projects) than because they actually provide substantial educational value. Omphalographer (talk) 20:15, 20 January 2025 (UTC)
Oppose - No evidence there is actual legal risk. The U.S. Copyright Office has declared many times now that AI-generated images are not copyrighted unless they are clearly derivative works. Any case of an image being a derivative work needs to be handled on a case-by-case basis, just like any other artwork. If Commons is actually concerned about copyright issues with derivative works, we need to delete about a thousand cosplay images first. (No, I'm not saying that all cosplay images are copyrighted derivative works, but a lot of them are.) Nosferattus (talk) 02:02, 21 January 2025 (UTC)
Oppose if a replacement sister project is not established.
Support if a sister project like meta:Wikimedia Commons AI is introduced and AI images can be moved to that project. As I already stated in 2024, I believe that AI-generated images and human-made images should be kept separate in order to protect, preserve, defend and encourage the integrity of purely human creativity. S. Perquin (talk) 20:33, 22 January 2025 (UTC)
Oppose A blanket ban because "If they are fair use, they may be legal. If they are not fair use, they may be illegal copyright violations." is not accurate: as courts have already found in the United States, intellectual property such as a copyright can only be applied to a human person intentionally making a creative work, not a software process or an elephant or a hurricane in a paint store. Individual AI-generated works may well be copyright infractions, but that would be for the same reasons as if a human person made a work that was influenced by existing copyrighted works, such as being virtually identical to the source work. I am not a lawyer and nothing I write here or anywhere should be taken as any kind of proper financial, legal, or medical advice. —Justin (koavf)❤T☮C☺M☯ 22:54, 22 January 2025 (UTC)
- A blanket ban for copyright reasons would likely encompass a number of uses that would not violate copyright. If this is unwarranted, there may be smaller categories that are more reasonable to consider per Commons:PRECAUTIONARY. For example, AI images of living individuals we do not have a free copy of, might be one area this could apply. I recall there was an AI image of a en:Brinicle discussed previously and deleted, which very obviously resembled the BBC footage of a Brinicle, likely as that was the first ever footage of this phenomena and remains part of a very limited set, but it's hard to work that into a general prohibition. CMD (talk) 02:28, 23 January 2025 (UTC)
- @Koavf: One issue with AI-generated images is that we don't usually have a way of knowing the country of origin, and they are currently copyrighted in the United Kingdom, although they are PD in the United States. The issue is that policy requires something not be copyrighted in the country of origin, not just the United States. That's not to say AI-generated images can't just be nominated for deletion on a "per image" basis when (or if) it's determined that said image was created in a country where they are copyrighted, but that goes against Commons:PRECAUTIONARY, and no other images get a free pass from it in the same way that AI-generated artwork seems to. I.e. some people have made the argument that there doesn't need to be a source for AI-generated images "because AI", which clearly goes against the guidelines. Not to say I think there should be a blanket ban either, but there should at least be more scrutiny when it comes to where AI artwork on here originates from, and enforcement of Commons:PRECAUTIONARY when it isn't clear. --Adamant1 (talk) 08:13, 23 January 2025 (UTC)
Oppose nothing new provided here. Until there is a broad legal consensus they’re copyright violations, they’re legal. And we can’t ban an entire medium just because there’s a lot of justifiable controversy around it— how are we supposed to illustrate DALL-E itself in that case? Dronebogus (talk) 09:08, 23 January 2025 (UTC)
- Like Grand-Duc, Omphalographer and others said, I think it's better to argue from a COM:SCOPE standpoint. I think it's worthwhile to add illustrative examples to the said policy or to a subpage of it, if necessary - examples where an AI image is unlikely to be in scope, perhaps along with other similar materials like amateur artworks. --whym (talk) 08:56, 25 January 2025 (UTC)
Oppose per Blythwood. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:39, 25 January 2025 (UTC)
Oppose a blanket ban as per the OP; illustrations made by AI are potentially useful. However, if we want to keep them, AI creations have to adhere to Commons:Scope and should be judged more critically than other content. AI images that are just created and uploaded for no educative purpose should get deleted, especially if someone makes them en masse. AI images intended for misinformation should also lead to user bans (on repeated offenses, after fair warnings). Best, --Enyavar (talk) 22:23, 27 January 2025 (UTC)
Support ban against AS (Artificial Stupidity), alternatively strict limitations:
- only users with autopatrol right may upload AI/AS-generated files
- upload at most 2 AI/AS-generated files per 24 hours
- Mass uploading of nonsense has become a big problem and waste of resources. Taylor 49 (talk) 00:03, 5 February 2025 (UTC)
- A skilled smart person aware of the issues & limitations can still use stupid tools to produce good results. I see neither mass uploading nor a big problem so far with AI images. Prototyperspective (talk) 00:11, 5 February 2025 (UTC)
Comment Really? Commons:Deletion requests/Files uploades by User:DenisMironov1 Taylor 49 (talk) 00:28, 5 February 2025 (UTC)
- In rare cases people upload a medium-sized batch of AI images. The file title already says it's AI-generated. People DR'd it and it's gone. No issue at all as far as I can see, and less problematic than people uploading 100 20MB photos of the same mundane subject for which there already are >200 varied pics, or 100 separate scans of book pages instead of a pdf file. There aren't many cases of people uploading mid or large numbers of low-quality AI files, and it's easily dealt with. I've been checking recent AI uploads for a long time and it was on average only a few files per few days, so moving slowly and easy to organize. Prototyperspective (talk) 18:09, 5 February 2025 (UTC)
Comment I'd like to add some info to the discussion. Recently, Meta Inc. (the owner of Facebook) got caught torrenting from a shadow library, namely Anna's Archive: https://torrentfreak.com/meta-torrented-over-81-tb-of-data-through-annas-archive-despite-few-seeders-250206 for the purpose of training their models. So, it's IMHO not far-fetched to say that any and all AI-generated media is akin to the Fruit of the poisonous tree: unless proven good, it may be sensible to assume that they are likely a copyright violation. Regards, Grand-Duc (talk) 18:07, 10 February 2025 (UTC)
- If you ever watched some pirated films or read a few downloaded books of a genre from which you learn or get inspiration from you can still produce a film or image of the same genre that is not a copyright violation. Prototyperspective (talk) 19:11, 10 February 2025 (UTC)
- Yes, but "inspiration" is, in the same vein as curiosity and creativity, a purely human behavioural trait (well, simians, Corvidae, Psittacidae and several Odontoceti show these too). No machine is able to replicate that, so machine-processed copyvio data remain a copyvio. Regards, Grand-Duc (talk) 19:32, 10 February 2025 (UTC)
- Yes, machines / software are different from humans. That doesn't change anything about the point made. Machines can machine-learn from copyright-restricted content and then produce free content as much as humans can biological-learn from copyright-restricted content and then produce free content if you prefer me to be more precise with the terminology. Prototyperspective (talk) 19:40, 10 February 2025 (UTC)
- Yes, but "inspiration" is, in the same vein as curiosity and creativity, a purely human behavioural trait (well, simians, Corvidae, Psittacidae and several Odontoceti show these too). No machine is able to replicate that, so machine-processed copyvio data remain a copyvio. Regards, Grand-Duc (talk) 19:32, 10 February 2025 (UTC)
Require VRT permission from nude models
There are currently many cases of nude models requesting the deletion of photos where they are visible. We do not have a clear policy on how to handle such cases, and every solution has problems. I want to propose a new process to minimize this problem for future uploads.
I would propose a new guideline like the following:
"Photos of nude people need explicit permission from the model verified through the VRT process. This applies to photos of primary genitalia and photos of identifiable people in sexually explicit/erotic poses also if only partial nude. This also applies to photos form external sources with an exception for trustworthy medical journals or similar. This does not apply to public nudity at protests, fairs and shows where photographing was allowed. For photos of such events only the regular rules on photos of identifiable people apply. This applies to all photos uploaded after Date X. Within the process the people are reminded that the permission is irrevocable. Having such permission does not automatically put the photo within the scope."
As I think there would be no more than a handful of cases per month, I think this could be handled by the VRT team. If this new task is a problem for the VRT, we could also ask if the T&S team could help in this sensitive area. GPSLeo (talk) 10:09, 11 January 2025 (UTC)
- I thought the GDPR right "to be forgotten" makes an "irrevocable" model release impossible? What would such a guideline mean for photos from pride parades? At pride parades there are regularly people with visible primary genitalia. C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 11:33, 11 January 2025 (UTC)
- I think there is no higher court decision on "model contract" vs. "right to be forgotten", but I would assume that the model contract is the superior right. Otherwise we would already have had cases of well-known movies where some actors got themselves removed from the movie. I will add a sentence on public nudity; I had this in mind but then forgot it when writing the draft. GPSLeo (talk) 11:41, 11 January 2025 (UTC)
Before putting new tasks on the VRT, please consider speaking with the VRT. Their current policy is not to process any personality rights releases, which also includes model contracts, not least because they are unable to reasonably verify such releases. --Krd 11:50, 11 January 2025 (UTC)
- I am aware that this is often more complicated than for copyright. But I think it is better to make a "delete if not verified" policy instead of keeping everything and handling all the removal requests, which also require identity confirmation. GPSLeo (talk) 12:07, 11 January 2025 (UTC)
- How many removal requests have there been in the last 2 years? Krd 12:29, 11 January 2025 (UTC)
- I do not know how many cases were handled privately by VRT and Oversight but for the cases starting as regular deletion requests I would estimate around ten to twenty cases in the last two years. GPSLeo (talk) 12:55, 11 January 2025 (UTC)
- No offence, but can we make sure we are addressing a problem, and not a non-problem, before we take such an expensive approach? I for sure don't see all such VRT requests, but I think I see at least half of them, and I have no memory of any relevant issue. If they happen, they are mostly cases where consent was initially given and is going to be revoked later, which is a situation other than the one addressed by the proposal.
- Who is going to ask the oversighters, so that we know what we are talking about? Krd 17:26, 11 January 2025 (UTC)
The proposal seems sensible to me, as long as the VRT would be actually willing to handle such permissions, see Krd's comment. I would, however, add something exempting historical photographs too (for example, the photographs in Category:19th-century photographs of nude people), or photographs of now deceased people in general (photos taken when the person was alive). In Switzerland, for example, the "right to one's own picture" (Recht am eigenen Bild) basically ends with death, see de:Recht am eigenen Bild (Schweiz) and can't be claimed by family members; in Germany, family members can claim it for up to 10 years after the person's death (per de:Postmortales Persönlichkeitsrecht). Gestumblindi (talk) 14:41, 11 January 2025 (UTC)
- I’d say if a nude model legitimately requests their picture be taken down, we just take it down; requiring VRT for each and every non-historical, non-nudist, non-self-shot photo of a nude person seems tedious and unnecessary. 10-20 cases is a non-trivial amount, but I’d think it’s a pretty small percentage of all the nude photography we host here. Dronebogus (talk) 15:24, 11 January 2025 (UTC)
Something like this may be reasonable, but the considerations raised by User:Gestumblindi and User:Dronebogus are relevant. To list three exceptions I see:
- Historical photos, especially photos that were routinely published in their own era and whose copyrights have now expired. E.g. I cannot imagine doubting appropriate consent on a nude photo of actress Louise Brooks.
- Photos from societies and cultures where what is in the West considered "nudity" is simply considered normal (e.g. places in Africa or Pacific Islands where women do not routinely cover their breasts).
- Photos taken at public events in countries where appearing in public is de facto consent for photography. E.g. the many people who appear naked at the Folsom Street Fair, or Fremont Solstice Parade, or Mardi Gras in New Orleans. It is not practical to get VRT from a person walking by in a parade, nor do I see any need to do so in a situation where they have no legal expectation of privacy.
I would not be surprised if there are other equally strong cases for exceptions. - Jmabel ! talk 19:50, 11 January 2025 (UTC)
- Yes, the part for historical photos should definitely be added and defined in a very broad sense (all photos older than 25? years). The second point is the reason why I made the complicated definition, to exclude female breasts in non-sexual contexts from the guideline. GPSLeo (talk) 20:37, 11 January 2025 (UTC)
- The proposal was about primary genitalia. Now you introduce secondary gender-specific body parts like breasts or a beard. There are societies that forbid a man to show a shaven face. Should we also require VRT permission for images of Iranian or Afghan men without a beard? C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 23:10, 11 January 2025 (UTC)
- The proposal would create very big issues (increase in work for VRT, increase of DRs towards nude pictures) to potentially solve very few (almost non-existent) issues. Christian Ferrer (talk) 08:41, 12 January 2025 (UTC)
Oppose A solution in search of a problem. The Squirrel Conspiracy (talk) 11:28, 12 January 2025 (UTC)
Comment I'm very active in VRT and I've never seen a case like this (nude models requesting the deletion of photos where they are visible). I think we can handle it when the moment arrives. --Ganímedes (talk) 22:49, 12 January 2025 (UTC)
Oppose Unneeded, and it would cause an increase in deletion requests and an increase in work for VRT. Isla (talk) 23:08, 14 January 2025 (UTC)
Support Assuming Jmabel's suggestions are implemented if it passes. Regardless, this seems like a reasonable proposal and I don't really think the arguments against it are compelling. God forbid the VRT team has to do a couple of more VRT permissions every now and then. --Adamant1 (talk) 03:16, 16 January 2025 (UTC)
- It's not about workload; the comment above was "Their current policy is not to process any personality rights releases, which also includes model contracts, not least because they are unable to reasonably verify such releases." Unless this issue is adequately addressed, the proposal is a non-starter. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:12, 16 January 2025 (UTC)
- Fair enough. I must have missed that. I agree the proposal is probably a non-starter if it's not worked out though. --Adamant1 (talk) 13:29, 16 January 2025 (UTC)
- If we see a need for something we are currently not able to do, we have to show this to get support from the WMF. The WMF will only help us find a solution if there is a consensus in the community that there is need for this. We need the community decision that there is a need before we can talk about finding solutions. GPSLeo (talk) 13:41, 16 January 2025 (UTC)
Oppose mandatory VRT permission. My proposal is: if a model makes a legitimate request to remove an image, we remove it, no questions asked. Dronebogus (talk) 18:27, 16 January 2025 (UTC)
- As far as your proposal is concerned, I suspect that's more or less the case already, if someone knows who or where to ask - but I'd absolutely support a more substantial proposal to make that an official policy, and to make it better known. Omphalographer (talk) 02:41, 17 January 2025 (UTC)
- Commons:Contact us/Problems mentions info-commons for issues about "Images of yourself". So in a sense, it's already handled in VRT, or at least our documentation says so.
- We have 2 Commons-related community queues in VRT: info-commons mentioned in Commons:Contact us/Problems and permissions-commons described in COM:VRT. The latter page might make it look like permissions-commons=VRT, but that is not true. whym (talk) 10:32, 18 January 2025 (UTC)
Support with Jmabel's exceptions. Nosferattus (talk) 02:07, 21 January 2025 (UTC)
Enable two new tools
Hi! As part of a project with User:Scann (WDU), I developed two new tools to edit Commons:
- AddFileCaption - Adds captions to the structured data of files in a given category
- AddFileDescription - Adds descriptions to the files in a given category
The tools are already enabled on MediaWiki.org and eswiki, and we'd like to enable them on Commons too. For that, I'd need to add two new gadgets (technical details explained on the links above and here). I have the necessary permissions (I'm a global interface editor) but would like to ask the community for support, questions, ideas or concerns. Thanks! Sophivorus (talk) 13:30, 21 January 2025 (UTC)
- Hi! Thanks for the mention. Just to clarify, this work was funded by Wikimedistas de Uruguay. Hope the community finds the tools useful! Scann (WDU) (talk) 13:52, 21 January 2025 (UTC)
Support Please make them both autopatrolled-only, to avoid issues with SDC vandalism. All the Best -- Chuck Talk 19:24, 21 January 2025 (UTC)
Support Looks interesting. Although I agree with Alachuckthebuck that probably only autopatrollers should be able to use the gadgets. --Adamant1 (talk) 18:10, 22 January 2025 (UTC)
Support I don't think it needs to be autopatrolled however: these are captions and descriptions, and until it becomes a problem -- adding requirements like autopatrolled prevents it from being useful for campaigns and other newcomer activities. Like the ISA tool, the way to prevent bad behavior on SDC, is training the people using it, not limiting who can use it, Sadads (talk) 12:34, 24 January 2025 (UTC)
Comment The tools should work the same way the ISA tool does, anyone that has an account can use it. These are not massive edits, there's no reason or purpose to limit who can use them -- they are precisely to make the Wikimedia Commons interface more intelligible and easier to use for newcomers. Scann (WDU) (talk) 12:49, 24 January 2025 (UTC)
- It's usually just better to limit something new to a specific group of editors until it's been tested. That's less of an issue in this case since the tools are already in use on other projects though. --Adamant1 (talk) 12:52, 24 January 2025 (UTC)
- The ISA tool is exactly an example for a tool that created lots of bad edits because people used it without reading the guidelines. GPSLeo (talk) 13:05, 24 January 2025 (UTC)
Support Very useful tools. As a campaign organizer I'd love to have these tools at hand to invite people to participate in new ways and engage with SDC, which is a powerful but neglected way of collaboration and engagement for newcomers. That's why I think these tools should be available to everyone. The power that they might potentially give to bad actors for vandalism will be balanced by giving anyone the chance to fix problems and, overall, to increase and improve the use of SDC. Mariana Fossatti (WK?) (talk) 16:44, 24 January 2025 (UTC)
Done Well, I just enabled both tools, see Template:Add file caption and Template:Add file description. I also went ahead and added a "group" parameter that basically allows limiting the use of the tool to some specific group (e.g. user, autoconfirmed, autopatrolled, etc). The current default is empty (no restriction), but if vandalism starts to occur, the default can be set to some group. Hope they bring many valuable edits, cheers! Sophivorus (talk) 15:51, 31 January 2025 (UTC)
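For reference, a minimal sketch of how such a "group" restriction can be checked from gadget JavaScript, assuming the standard mw.config interface; this is not the actual implementation of these templates, and the surrounding gadget code is invented for illustration.

```javascript
// Minimal sketch of a group-gated gadget; not the actual implementation
// of {{Add file caption}} / {{Add file description}}.
// mw.config.get( 'wgUserGroups' ) is a standard MediaWiki config value
// listing the current user's groups, e.g. [ '*', 'user', 'autopatrolled' ].
( function () {
	// An empty string would correspond to the default "no restriction".
	var requiredGroup = 'autopatrolled';
	var userGroups = mw.config.get( 'wgUserGroups' ) || [];

	if ( requiredGroup && userGroups.indexOf( requiredGroup ) === -1 ) {
		return; // User lacks the required group: do not attach the UI.
	}

	// ... attach the caption/description editing interface here ...
}() );
```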
Major damage to Wikimedia Commons
As far as I can see, major damage has progressively been done to Wikimedia Commons over the last few years by chopping up categories about people into individual "by year" categories, making it
- virtually impossible to find the best image to use for a certain purpose, and
- virtually impossible to avoid uploading duplicates, since searching/matching images has become virtually impossible.
Here is a perfect example. I have a really good, rare picture of her, but I'll be damned if I'm willing to wade through all the "by-year" categories to try to see if Commons already has it. The user who uploaded this didn't even bother to place it in a personal category. Why should they, with all the work required to try to find the category at all & fit the image in there?
I am not objecting to the existence of categories "by year". Searching is the problem.
What, if anything, can be done about this mess, which is steadily getting worse? Could some kind of bot fix it?
I really feel that this is urgent now and cannot be ignored any longer. The project has become worth much, much less through the problem described. Or have I missed/misunderstood something here? SergeWoodzing (talk) 10:00, 23 January 2025 (UTC)
- This is a duplicate discussion of Commons:Administrators' noticeboard#URGENT! Major damage to this project. CMD (talk) 12:22, 23 January 2025 (UTC)
- Yes, but the user was told there to bring it here. - Jmabel ! talk 17:28, 23 January 2025 (UTC)
- Contemporary VIPs produce a ton of images. Sorting them by year makes sense - otherwise you would have to deal with hundreds of files in one category. As for Wikipedia: go to the most recent usable photo of "Sophie" and use that. And if it is not the most flattering... well, that's life. Alexpl (talk) 12:33, 23 January 2025 (UTC)
- Uh, I don't think that's really what we ought to do. I've tried for many years always to use the best possible images. --SergeWoodzing (talk) 17:52, 23 January 2025 (UTC)
- It feels to me like the fear is a bit overblown, but (if you're looking for images of Donald Trump, for example) you can enter deepcategory:"Donald Trump by year" in the regular search, et voilà! You can see many Donald images at once without looking into each subcat :) (Also see COM:Search for tags and flags) --PantheraLeo1359531 😺 (talk) 12:51, 23 January 2025 (UTC)
- Splitting images by year is often counterproductive, but that's necessary when there are a lot of them for one person. Yann (talk) 15:45, 23 January 2025 (UTC)
- See Help:Gadget-DeepcatSearch and TinEye image reverse search among other things. Prototyperspective (talk) 16:04, 23 January 2025 (UTC)
Please! I have not suggested that images should not (also) be sorted by year, so there is no need to defend that kind of sorting. I've asked for a search remedy & will now try the tips we've been given here. Thank you for them! --SergeWoodzing (talk) 17:52, 23 January 2025 (UTC)
- @SergeWoodzing: Another solution I've seen is addition of a flat category for all of a topic's files, to achieve your purpose but still allow for the granularity that others like to achieve. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 18:31, 23 January 2025 (UTC)
- I agree with the general issue brought up by SergeWoodzing. "By year" in many cases only makes sense if a large number of files cannot be meaningfully sorted in another way. Non-recurring events, certainly. But things that undergo little change from year to year do not necessarily need atomized categories, and I have also noticed that by-year categories have been steadily on the rise since 2018. Not always for the better. The following examples are not what literally exists right now, but it would be easy to find real cases just like them.
- I have recently encountered more and more "books about <topic> by year", and that means that either a very broad topic like "biology" is split up by year (which later makes it much harder to split the topic into "botany" vs. "zoology", or by "continent" or "language", or to meaningfully search the publications), or that a very narrow topic like the "Mexican-American War" gets splintered into single-file categories. How can two books about the same war be categorically different because one was published in December 1857 and the other in March 1858?
- "Maps by year" are my pet peeve. Nearly all maps before the 21st century had many-years-long production processes. The further back we go in time, the less the publication year of an (old) map matters, the categories should rather differentiate the location and topic, not the day/month/year of publication. An old city plan of Chennai, an old topo map of Rajasthan, and an old geological map of Bengal all from the same year, have so little in common that it is pointless to primarily group them under "<year> maps of India". (I have been vocal about this several times here on the pump already, and got some support too).
- "Tigers by year": I think everyone should see the absurdity. Photos of tigers should be grouped by location (zoo, country) or by growth stage (juvenile, adult...), not whether they were photographed in 1998 vs. 2015.
- Location by year. Without EXIF/metadata, one photo of the Taj Mahal looks like another, with no telling that one was created in 2008 and the other in 2021. It makes much more sense to primarily sort them by architectural element (main building, gates, interior...) than by year. Thankfully, this is done too, but not always. And sure, with some events even two years make a huge difference, like Verdun 1913 and Verdun 1915; but many "non-changing" locations are fine without a by-year split-up.
- "Person by year" is the OP topic already. Actually, in my opinion this makes mostly sense as long as MANY files exist. If there are just 25 files of a celebrity, please do NOT split that collection up into 12 by-year subcategories. Doing so is a case of well-intentioned obstruction, as access just gets harder with no further benefit.
- Much more could be said. So while I am cautiously supportive of several important use cases of by-year categories, atomization has to stop. --Enyavar (talk) 02:12, 24 January 2025 (UTC)
Comment I made a proposal a few months ago to confine "by year" categories to images that show a meaningful distinction by year, for instance something like a yearly event where there's actually a difference between the years. Whereas, say, images of tigers (per Enyavar's example) aren't worth organizing per year, because there's no meaningful difference between a tiger in 2015 and one in 2016. Anyway, it seemed like there was general support for the proposal at the time.
- The problem is that there's no actual way to enforce it, because people will ignore consensus, recreate categories, and attack anyone who disagrees with them. It's made worse by the fact that admins on here seem to have no will or ability to impose any kind of standards. They just cater to people doing things their own way regardless of consensus, as long as the person throws a big enough tantrum about it. There are plenty of proposals, CfDs, village pump and talk page discussions, etc. that should already regulate how these types of categories are used. They just aren't ever enforced to any meaningful degree because of all the <s>limp wristed</s> weak pandering to people who use Commons as their own personal project. --Adamant1 (talk) 06:47, 24 January 2025 (UTC)
- So should they ban you promptly for using a homophobic slur ("limp wristed"), or should they just let you continue going on your way ignoring consensus?--Prosfilaes (talk) 08:16, 24 January 2025 (UTC)
- @Prosfilaes: I didn't actually know it was a homophobic slur. I just thought it meant weak. I struck it out, though. Thanks for letting me know. Not that I was saying anyone should be banned for ignoring the consensus, but if people intentionally use homophobic slurs then yes, they should be banned for it. With this, though, it's more about the bending over backwards to accommodate people who don't care about or follow the consensus than it is about sanctioning anyone over it. Such people should just be ignored and the consensus should be followed anyway. There's no reason whatsoever that it has to involve banning people. Just don't pander to people using Commons as their own personal project. It's not that difficult. --Adamant1 (talk) 09:28, 24 January 2025 (UTC)
- For this topic at least, I don't think I have seen actual attacks against other users, thankfully. SergeWoodzing has used some strong condemnations of the status quo on Commons in general, but I do not perceive his statement as an attack against particular users. Now, Adamant points out the problem, which is that we seemingly don't have a guideline or even a policy on which topics may be organized by year and which ones should rather not get a by-year categorization. I'm almost sure that people are creating by-year categories out of the best intentions, mostly because they are boldly imitating the "best practice" of other users, ignorant of some consensus that may or may not have been formed among a dozen users in the village pump. Which means that such users have to be talked out of the idea individually once they start by-year categories for an unsuitable topic. --Enyavar (talk) 16:33, 24 January 2025 (UTC)
- There's certainly an aspect to this where people indiscriminately create by-year categories because other people do. But it still comes down to a lack of will and/or mechanisms to enforce standards. You can ask the person doing it to stop, but they can just ignore you and continue. Then what? No one faces repercussions for ignoring the consensus by continuing to create the categories. I've certainly never seen any, and I've been involved in plenty of conversations about overcategorization. The person usually just demagogues or outright ignores the issue and continues doing it. --Adamant1 (talk) 16:54, 24 January 2025 (UTC)
- Tbf I didn’t really know it was either. I wouldn’t even call it a “slur”— more of a general insult with homophobic connotations, like “sissy” or “pansy”. Dronebogus (talk) 17:57, 27 January 2025 (UTC)
Support resolving the concerns reported by user SergeWoodzing. Such "alternative overcategorization" by year, or even worse by state, makes useful files hard to find. Not for "contemporary VIPs" brewing gazillions of useless images, but for relevant topics, especially if the total number of files is low. Taylor 49 (talk) 00:18, 5 February 2025 (UTC)
Should a "bot cleanup kit" exist?
[edit]In the last 6 months, Commons has had 2 bots with extended issues where they created tens of thousands of invalid edits.
Both times, I cleaned up the mess with massrollback and either the account creator flag or a bot account. But it's a less-than-ideal solution, as I was hitting 2 thousand EPM (edits per minute) while performing rollbacks, and the edits were not marked as bot edits. So my question is:
Should we create a tool/script/playbook for doing bot cleanups? I understand bot owners are responsible for the edits made by their bots, but having dedicated tools to rapidly handle 75 thousand rollbacks without causing 5 minutes of database lag would be nice. A question I have been asked frequently is why this can't be done slowly. The problem is that if, for any reason, an affected page is edited, any error introduced by the bot can't be fixed easily, often requiring manual correction. All the Best -- Chuck Talk 18:30, 24 January 2025 (UTC)
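To make the "cleanup kit" idea concrete, here is a minimal sketch of what such a script could look like, written in Python against the MediaWiki Action API. This is not the tool used in the cleanups above, only an illustration: it assumes an already-authenticated requests session (bot-password login omitted) whose account holds the rollback right, and the target username, edit summary and rate limit are placeholders.

```python
# Hypothetical cleanup-kit sketch: roll back a malfunctioning bot's edits at
# a gentle, configurable rate, marked as bot edits. Assumes the session is
# already logged in (e.g. via a bot password) and has the rollback right.
import time
import requests

API = "https://commons.wikimedia.org/w/api.php"
TARGET = "ExampleBrokenBot"   # placeholder name of the malfunctioning bot
RATE_PER_MIN = 30             # illustrative; far below the ~2000 EPM above

session = requests.Session()  # authentication step omitted in this sketch

def get_rollback_token():
    r = session.get(API, params={"action": "query", "meta": "tokens",
                                 "type": "rollback", "format": "json"}).json()
    return r["query"]["tokens"]["rollbacktoken"]

def pages_edited_by(user):
    """Yield distinct titles from the user's contributions, newest first."""
    seen = set()
    params = {"action": "query", "list": "usercontribs", "ucuser": user,
              "uclimit": "500", "ucprop": "title", "format": "json"}
    while True:
        r = session.get(API, params=params).json()
        for contrib in r["query"]["usercontribs"]:
            if contrib["title"] not in seen:
                seen.add(contrib["title"])
                yield contrib["title"]
        if "continue" not in r:
            break
        params.update(r["continue"])  # API-supplied continuation values

token = get_rollback_token()
for title in pages_edited_by(TARGET):
    # action=rollback reverts only the top consecutive edits by TARGET, so a
    # page someone else has edited since will fail here and needs manual
    # correction -- exactly the reason doing this slowly is risky.
    r = session.post(API, data={"action": "rollback", "title": title,
                                "user": TARGET, "token": token,
                                "markbot": 1,  # mark the reverts as bot edits
                                "summary": "Cleanup of malfunctioning bot run",
                                "format": "json"}).json()
    if "error" in r:
        print(f"Skipped {title}: {r['error'].get('code')}")
    time.sleep(60 / RATE_PER_MIN)  # crude client-side rate limiting
```

Marking the reverts with markbot and throttling on the client side would address both pain points named above; a documented, shared version of something along these lines is essentially what the proposed kit would be.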
- We should require bot operators to be able to clean up mistakes they make with their bot. In the bot request they would have to confirm that they can also revert the edits they make with the bot. If they cannot guarantee this, the bot cannot be approved. GPSLeo (talk) 19:00, 24 January 2025 (UTC)
- What GPSLeo said. Krd 04:23, 25 January 2025 (UTC)
- @Krd, seeing as you have handled most of the bot requests of the last 5 years: when does this check happen? And if a bot does mess up and make a bunch of junk edits, should we really be using the same bot to fix it? (Unless we have a standardised script to do it, I don't think that's an option.) All the Best -- Chuck Talk 04:50, 25 January 2025 (UTC)
- There is no check, but it speaks for itself that if a bot operator messes up at large scale, they are responsible for at least helping to clean it up, whatever it takes. Everybody who is running a real bot is able to do that. Perhaps we should consider being more hesitant about AWB or js gadget "bots" in the future. In order to understand the actual size of the problem, perhaps the 2 mentioned cases should be analyzed regarding what exactly went wrong. Krd 05:24, 25 January 2025 (UTC)
- @AntiCompositeNumber was looking into the flickr goof last I heard. As to WLKbot, nothing went wrong per se; rather, the operator @Kim Bach jumped the gun on implementing something, creating 3k categories that still need cleanup. Also, these were both full bots, not script/AWB bots; their edits were fixed by a script. @MolecularPilot has since updated the script, allowing for ratelimiting and marking bot edits (haven't tested the second part yet): User:MolecularPilot/massrollback.js. Even if Commons has its ducks in a row, once the tool exists and is documented, it can be used movement-wide, having a much larger impact. All the Best -- Chuck Talk 21:42, 25 January 2025 (UTC)
- Hi! The new version of massrollback supports ratelimiting (you tell it the max number of rollbacks to make in a minute) but it doesn't support marking them as bot edits if you're flagged. I'm working on this part now! :) MolecularPilot (talk) 23:03, 25 January 2025 (UTC)
- Actually, it already did mark bot edits if you're flagged; I forgot that I coded that part. So, yeah! :) MolecularPilot (talk) 08:42, 26 January 2025 (UTC)
- Is it sufficient to use the existing mw:Manual:Pywikibot/revertbot.py? If not, what is missing? whym (talk) 12:03, 1 February 2025 (UTC)
- Not all bots use pywikibot. And also, I'm not so sure a bot that just screwed up should be doing the cleanup. All the Best -- Chuck Talk 20:32, 1 February 2025 (UTC)
- With the script linked above, you can specify the target user account whose recent edits are to be reverted. The target account doesn't need to be a Pywikibot bot, nor even a bot. whym (talk) 01:53, 2 February 2025 (UTC)
- I didn't know that existed. I have user:chuckbot kicking around with pywikibot and a working backend, so I might do a bot request for that. All the Best -- Chuck Talk 01:57, 2 February 2025 (UTC)
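Building on whym's point about targeting a specific account, the first step of any such cleanup is measuring the damage. Below is a small sketch of that audit step using Pywikibot's usercontribs listing; it assumes Pywikibot is installed and configured for Commons, and the target username is a placeholder.

```python
# Audit sketch: count how many distinct pages a suspect account touched, to
# judge the scale of a cleanup before any mass revert is run. Assumes a
# working Pywikibot setup for Commons; the username below is a placeholder.
from collections import Counter

import pywikibot

site = pywikibot.Site("commons", "commons")
edits_per_page = Counter()

# usercontribs yields one dict per contribution, newest first
for contrib in site.usercontribs(user="ExampleBrokenBot", total=5000):
    edits_per_page[contrib["title"]] += 1

print(f"{len(edits_per_page)} distinct pages edited")
for title, n in edits_per_page.most_common(20):
    print(f"{n:5d}  {title}")
```

Knowing how many distinct pages are affected, and which were hit repeatedly, makes it easier to judge whether a throttled rollback run will suffice or a dedicated bot request is warranted.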
Expanding an explanation on the De-adminship policy
[edit]I would like to propose this text to expand the de-adminship policy. This text came as a result of this discussion on Meta and was also inspired by the current policies on English Wikipedia:
Administrators are expected to adhere strictly to the principles of respect and proper etiquette as outlined in the Universal Code of Conduct. Any administrator who repeatedly or egregiously violates these principles by engaging in disrespectful behavior, personal attacks, or actions that undermine the community's trust may have their administrative rights revoked. Administrators are accountable for their actions and must provide clear and prompt explanations when their conduct or decisions are questioned. Repeated failure to communicate, poor judgment, or misuse of administrative tools, as well as conduct incompatible with the responsibilities of the role, may result in sanctions. Such cases will be reviewed by a designated committee, which will evaluate the severity, frequency, and context of the violations, ensuring a fair and transparent process. Administrators must maintain proper account security, including using strong passwords and reporting any unauthorized access immediately, as compromised accounts may lead to immediate loss of administrative privileges. In cases where violations persist despite warnings or where the offenses are severe, the administrator’s rights may be permanently revoked to safeguard the integrity of the platform and its community. Reinstatement of administrative rights, if sought, will be subject to thorough evaluation by the committee, considering the administrator’s past actions, corrective measures taken, and the current trust of the community.
This additional description in the current policy aims to uphold a respectful, accountable, and secure environment, ensuring that administrators act in alignment with the values and expectations of their role in our community. Wilfredor (talk) 19:51, 24 January 2025 (UTC)
- I think the process as it is is perfectly fine, but it can certainly be improved. Clarifying that personal attacks and disrespectful behaviour are not okay seems like a good step; however, common sense applies. Also, your proposal mentions a committee whose creation you do not elaborate on further. In any case, I would oppose creating a committee and leave the decision of desysopping in the hands of the community (i.e. voting). I don't mean to condone wrong behaviours, but a one-off disrespectful comment or attack could be pardoned; the desysopping process should be reserved for repeated bad behaviour only. Bedivere (talk) 21:42, 24 January 2025 (UTC)
- Also, stating that "Administrators are expected to adhere strictly to the principles of respect and proper etiquette as outlined in the Universal Code of Conduct" is redundant, since all users are expected to abide by that code of conduct, including admins, for obvious reasons (it is literally verbatim). Bedivere (talk) 21:46, 24 January 2025 (UTC)
- We all should keep in mind that the UCOC is the BARE MINIMUM for acceptable behavior; we should hold admins to a much higher standard than the UCOC. All the Best -- Chuck Talk 21:58, 24 January 2025 (UTC)
- Yeah, but the proposed wording is not stronger. I would not oppose a stronger wording at all (just fyi) Bedivere (talk) 22:14, 24 January 2025 (UTC)
- @Wilfredor, would you be willing to add a sentence into the proposed text stating that the UCOC is the bare minimum, and that admins should be well above the UCOC at all times? All the Best -- Chuck Talk 22:18, 24 January 2025 (UTC)
- Unfortunately, though this seems to be obvious, I do not feel that it is being fulfilled. Wilfredor (talk) 22:48, 27 January 2025 (UTC)
- Common sense isn't that common. All the Best -- Chuck Talk 01:32, 28 January 2025 (UTC)
- It can be easily weaponized to serve personal agendas, though, so a good amount of good sense and discretion is strongly recommended when using it. Darwin Ahoy! 22:17, 24 January 2025 (UTC)
- That's right. All I would support is giving the current policy stronger wording and saying that personal attacks and poor behaviour are not accepted. But as it is now, there should be a discussion (always local) before initiating a proper desysopping vote. Bedivere (talk) 22:27, 24 January 2025 (UTC)
Make Commons:Civility, Commons:Harassment and Commons:No personal attacks a policy
[edit]- @The Squirrel Conspiracy Do you think it is good to close this that fast? There are many comments mentioning that there is some need to adapt the pages for Commons. If they are now a policy, every change that is not very minor would require separate community confirmation. GPSLeo (talk) 08:45, 1 February 2025 (UTC)
- That's fair, but if the proposed changes are uncontroversial, it should be easy to get a consensus for them, and if they are controversial, they shouldn't be in the policies in the first place.
- Candidly, I'd like to think that after 15 years, I have a good sense for what does and doesn't get done on this project, and I suspect that no one is going to step up and rewrite the policies regardless of whether the discussion stays open for two weeks or not. Happy to be proven wrong, but there are lots of gaps in Commons' bureaucracy and infrastructure that have never been fixed.
- If you want to revert the close, go ahead though. The Squirrel Conspiracy (talk) 10:34, 1 February 2025 (UTC)
- As for the procedure, I would at least wait until one weekend is over, to include people who only find volunteer time on weekends. While I agree with the observation that there have been policy gaps for a long time, I think the long time span also means that it wouldn't hurt to spend a few more weeks, or at the very least the proposed 2 weeks. Using Template:Centralized_discussion or even MediaWiki:Sitenotice wouldn't be unreasonable for this, considering most users don't frequent COM:VPP but are affected. Sitenotice might be more suited for the final decision, though. whym (talk) 02:03, 2 February 2025 (UTC)
- @GPSLeo: Do you think it was premature to promote those pages? I do. Whether we should un-promote them might be a different matter (unless The Squirrel Conspiracy voluntarily undoes the changes), though. One remedy might be to recognize that they have community consensus at a general level, and that they are adapted policies that might still have rough edges in specifics. --whym (talk) 10:32, 7 February 2025 (UTC)
- I think it was too early, but I would just keep the discussions going in the current way and in some weeks hold a final vote to get clear consensus on the final versions. GPSLeo (talk) 18:17, 14 February 2025 (UTC)
- I think Commons:No personal attacks talks too much about articles and article talk pages, which we don't have in general. And I don't know what the Commons counterpart to regular disagreements on Wikipedia talk pages would be. whym (talk) 11:58, 1 February 2025 (UTC)
- See also an ongoing discussion in Commons_talk:Civility#Pre-policy_debates (started a few days ago). whym (talk) 02:18, 2 February 2025 (UTC)
- Similarly, I've started a discussion about making our new outing rules more Commons-specific over on Commons talk:Harassment. --bjh21 (talk) 09:40, 5 February 2025 (UTC)
- In the past, before these stringent rules even became "policy", I already saw users clashing at least weekly because of linguistic and cultural misunderstandings.
- How many sysops remember to give the benefit of the doubt before wielding this weapon against minority users?
- LOL.--RoyZuo (talk) 14:02, 5 February 2025 (UTC)
@The Squirrel Conspiracy and Matrix: FYI, the promotion of Commons:Harassment to policy has been undone. Nosferattus (talk) 14:22, 6 February 2025 (UTC)
- To be clear, it was undone by Matrix. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:37, 6 February 2025 (UTC)
- Sorry, I didn't see this discussion, I reverted my reversion. —Matrix(!) ping one when replying {user - talk? - useless contributions} 18:46, 6 February 2025 (UTC)
- I think this indicates that we could have advertised more before voting: putting a notice on the proposed policy page and its talk page. (Not the fault of the proposer - they specifically said voting was to be done later, not immediately.) whym (talk) 11:39, 7 February 2025 (UTC)
- Is there some way we could specifically mark these as draft policy? - Jmabel ! talk 21:15, 7 February 2025 (UTC)
- There's {{Proposed}} which is close and I think includes what you want. —Justin (koavf)❤T☮C☺M☯ 01:11, 8 February 2025 (UTC)
- But that does nothing to suggest that it is a largely agreed-upon draft where we are just hammering out details. - Jmabel ! talk 04:21, 8 February 2025 (UTC)
Expanding Template:Official Doctor Who YouTube channel and Template:Official Star Wars Flickr stream
[edit]I would like {{Official Doctor Who YouTube channel}} and {{Official Star Wars Flickr stream}} to be expanded to allow the tagging of content from other official Flickr accounts and YouTube channels. Examples of official YouTube channels which could be added include Bravo, Cartoon Network India, FOX Sports, Harry Potter, HBO, MTV Shores, MTV UK, MSNBC, NBC News, NickRewind, Nicktoons, Prime Video AU & NZ, Rooster Teeth Animation, Warner Bros. Games and Warner Music New Zealand. Aside from the aforementioned Star Wars account, I am unable to think of any official Flickr accounts that publish their uploads under a free license even occasionally (please let me know of any that do). Thoughts? JohnCWiesenthal (talk) 21:56, 10 February 2025 (UTC)
Possibly easier way to contact local oversighters via Wikimail
[edit]Hello,
I recently had the need to contact our Commons oversighters. I knew from my home wiki, DE-WP, that an option to contact the German oversighting team through Wikimail, using de:User:Oversight-Email, is offered. I was a bit disappointed that Commons is a notable exception among projects in not having such an easy-access route (it may be the largest project where this is missing, according to the description on de:Benutzer:TenWhile6/OSRequester). Could a Wikimail way of contacting oversight be set up, parallel to the existing mailing list? Regards, Grand-Duc (talk) 12:08, 14 February 2025 (UTC)
Support This is a major issue with Commons oversight. That, and adding a T&S role account for Wikimail. I think they attached the emergency account, but I'm unsure. All the Best -- Chuck Talk 17:55, 14 February 2025 (UTC)
- I do not think it is needed to implement this now, as we will soon (I think before the end of 2025) have the new meta:Incident Reporting System, made exactly to solve this problem. GPSLeo (talk) 18:15, 14 February 2025 (UTC)
- "Soon" and "Before the end of 2025" is somewhat contradictory! Worst case, that's more than 9 months in the future. Can somebody knowledgeable please inform me and whoever is interested on the amount of work needed to setup such a "contact user account"? The lesser the effort, the sooner it should come, even if it is only for a limited amount of use time. It's kinda the same reasoning as a military who refurbishes a warship, only to put it into reserve status a few months later (like it was done on the carrier USS Franklin (CV-13) or several other US Navy ships after V-J day). The use case is clearly shown, the potential replacement functionality has no clear ETA given yet, so waiting for it is non sensible. Regards, Grand-Duc (talk) 19:01, 14 February 2025 (UTC)
- The problem with an account for this is the question of who has access to it. All oversighters would need access so that they can check each other, but everyone who is able to access the account would also be able to change the password and lock everyone else out. Additionally, the account should definitely use 2FA, which makes it hard to keep accessible for all oversighters. The worst scenario would be that someone with access to the account changes the email address unnoticed in order to phish for reports. We could decide on one oversighter who owns this account, but in case of problems (loss of rights and not handing over the account, or lost contact/death) we would need to figure out a solution to regain access through the WMF MediaWiki operations team. Because of the potential for serious trouble, I think we can keep things as we have for more than a decade for one more year, until we have a much better solution. GPSLeo (talk) 19:45, 14 February 2025 (UTC)
- "Potential serious trouble"? Do you hint to that people who sign a confidentiality agreement and identify themselves in front of the site operator would regularly go postal and make nasty trouble, breaching privacy for whatever reason? What's the base for the whole adminship, checkusership and trust in licensing, then? Pinging user:Ra'ike and user:Raymond, the first as OS on DE-WP and the second as local OS: do you have any insight on how the German contact user is set up and if a similar thing is also suitable here and now? Regards, Grand-Duc (talk) 21:14, 14 February 2025 (UTC)
- They don't need account access, they just need the email access; someone from the WMF can have the password. This account only needs to forward all emails sent to it from Commons to the oversighters' mailing list. En-wiki can do it for 4 different role accounts; we can do 1. All the Best -- Chuck Talk 22:44, 14 February 2025 (UTC)
- It is not possible to have different mail addresses for Wikimail and password reset. Therefore the only thing that could be handled by the WMF would be the second factor. But then setting up the account and logging in would be very complicated. GPSLeo (talk) 05:39, 15 February 2025 (UTC)
- @Grand-Duc Some thoughts from me as a Commons OS, not discussed with the OS colleagues. First, I do not see a big advantage of such a Wikimail: one click to open the Wikimail form versus one click on the current e-mail address to open the mail program. Anyway. Technically it is easy: creation of the OS user account on Commons with the current mailing list email address in the preferences. All mail will be forwarded to our mailing list (not moderated!) and can be handled as usual. One caveat: we do not have a safe place for storing the password. Any of us can have it, of course, but when oversighters change, there is no guarantee that the password will be passed on.
- I will ask my OS colleagues today. Raymond (talk) 08:49, 15 February 2025 (UTC)
- Couldn't the foundation have the password? All the Best -- Chuck Talk 23:46, 15 February 2025 (UTC)
Support unless this is somehow tremendously more difficult than I can imagine it to be. - Jmabel ! talk 19:44, 14 February 2025 (UTC)
Support --Adamant1 (talk) 03:46, 15 February 2025 (UTC)