Commons:Village pump/Proposals
This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2024/12.
- One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
- Have you read the FAQ?
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.
CAPTCHA for many IP edits
There is a new feature that allows AbuseFilters to require a CAPTCHA before an edit is saved. I would like to enable this for many IP edits, especially IP edits on mobile. The aim of this is to reduce the huge amount of accidental and nonsense edits. Are there any concerns against this? GPSLeo (talk) 10:06, 18 August 2024 (UTC)
- No, it would be good to reduce maintenance time to free up time for other tasks. However, I doubt this is enough and have called for better vandalism/nonsense-edit detection, like ClueBot does on Wikipedia, here, which may also provide some context for this thread. Prototyperspective (talk) 10:25, 18 August 2024 (UTC)
- Detection of nonsense after it was published is not our problem; this is possible with current filters. We do not have enough people looking at the filter hits and reverting the vandalism. We therefore need measures to reduce such edits. If we do not find a way to handle this we need to block IP edits entirely. GPSLeo (talk) 10:56, 18 August 2024 (UTC)
- I think we rather need measures to automatically revert such edits. Detection is a problem if it's not accurate enough: if it produces too many false positives, people won't deploy it. The proposal is not just about detection but also about what follows from there – for example, one could automatically revert such edits but add them to a queue to check in case the revert is unwarranted. Prototyperspective (talk) 11:00, 18 August 2024 (UTC)
- We might want to experiment with mw:Moderator Tools/Automoderator. It probably won't work perfectly at a first attempt, but we will at least get some indication of where it works and where it doesn't. whym (talk) 01:18, 1 September 2024 (UTC)
- Very interesting! Thanks for the link, it's very constructive and if possible please let me know when WMC enables this or when there is some discussion about enabling it.
It could save people a lot of time and keep content here more reliable/higher quality. I think there's not even auto-detection for when e.g. 80% of a user's edits have been reverted, for checking the remainder and whether further action is due. Prototyperspective (talk) 23:18, 1 September 2024 (UTC)
- "I think we rather need measures to automatically revert such edits": Absolutely yes. I think the risk of losing well-intentioned IP edits in Commons is quite low (for example, I had edited Wikipedia as an IP user many times before I registered, but I've never thought of editing Commons as an IP user). MGeog2022 (talk) 21:27, 26 September 2024 (UTC)
- CAPTCHAs are supposed to stop robots from spamming, right? Why would this stop some random human IP user from posting “amogus sussy balls”? Dronebogus (talk) 14:05, 18 August 2024 (UTC)
- Seconding this. CAPTCHAs should only be used to prevent automated edits, not as a means of "hazing" users making low-effort manual edits. Omphalographer (talk) 20:12, 18 August 2024 (UTC)
- Maybe candidates could be edits that are currently fully blocked but have some false positives that could be let through?
∞∞ Enhancing999 (talk) 13:59, 27 August 2024 (UTC)
- You did not consider the full rationale of the OP, who wrote
huge amount of accidental […] edits
and this measure would be partly, and probably mainly, about greatly reducing accidental edits. The OP however failed to name concrete specific examples, which have been brought up in a thread elsewhere. I favor better detection of nonsense edits and automatic reverting of them, but requiring CAPTCHAs for IP edits on mobile or for certain actions may still be worth testing for a month or two to see whether it actually reduces these kinds of edits. Prototyperspective (talk) 16:43, 27 August 2024 (UTC)
- I'd totally support requiring CAPTCHAs for edits on mobile in general, not just for IP addresses. I know personally I make a lot of editing mistakes on mobile just because of how clunky the interface is. There have also been plenty of instances where I've seen pretty well-established users forget to sign their names or make other basic mistakes on mobile. So I think having CAPTCHAs on mobile for everyone would be a really good idea. --Adamant1 (talk) 17:59, 27 August 2024 (UTC)
- In Special:Preferences there is an option "Prompt me when entering a blank edit summary (or the default undo summary)". Enabling this seems like a good way to provide a chance to briefly stop and review what you are trying to do. I wonder if it's possible to enable it by default. A captcha answer has no productive value, but a good edit summary will do. whym (talk) 01:15, 1 September 2024 (UTC)
- I'd support that as long as there's a way for normal, logged in users to disable it if they want to. I think any kind of buffer between making an edit and posting it would reduce bad edits though. Even ones that are clearly trolling. A lot of people won't waste their time if they have to take an extra step to post a message even if it's something like writing an edit summary. --Adamant1 (talk) 01:34, 1 September 2024 (UTC)
- There is something to be said for en:WP:PBAGDSWCBY and en:WP:ROPE (I know, we don't ban here, just substitute indef for ban). — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 02:01, 1 September 2024 (UTC)
- @Jeff G.: True. That's one of the main reasons I support requiring people to have an account since it seems to be much easier to track and ban editors that way. --Adamant1 (talk) 02:05, 1 September 2024 (UTC)
- @Adamant1: Like it or not, we still have "anyone can contribute" right on the main page. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 02:17, 1 September 2024 (UTC)
- @Jeff G.: Anyone can still contribute if we require accounts. I could see not requiring accounts if there was legitimate reason for it, but I've put a lot of thought into this over the last couple of years and can't think of one single legitimate reason why someone wouldn't be able to create one. We'll have to agree to disagree though. I can understand why they let IP edit Wikiprojects back in the day though, but the internet and people are just different now and the project should be able to adapt to the times. --Adamant1 (talk) 02:21, 1 September 2024 (UTC)
- This does not help in the cases this is about, as these types of edits always have auto-generated edit summaries and no way to edit the edit summary. GPSLeo (talk) 04:32, 1 September 2024 (UTC)
- Maybe that is a software problem to be fixed? It already says "(or the default undo summary)" after all. Reminding users to add a bit more to what's auto-generated seems like a natural extension. whym (talk) 18:54, 1 September 2024 (UTC)
- The Wikibase UI does not have such a feature and in the many years of Wikidata it was not considered a problem that changing the edit summary is not possible. GPSLeo (talk) 20:24, 1 September 2024 (UTC)
- Can Commons customize that in their Wikibase instance? It's not yet implemented in the Wikidata UI, but on the API level Wikibase supports edit summaries according to d:Help:Edit summary. whym (talk) 23:38, 1 September 2024 (UTC)
- I make much fewer editing mistakes on mobile when I use my new portable bluetooth mini keyboard. Touch-typing in the dark, however, can still be problematic. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 01:52, 1 September 2024 (UTC)
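As a side note on whym's point above that Wikibase accepts edit summaries at the API level even though the UI does not expose a field for them, here is a minimal, hypothetical Python sketch of a structured-data caption edit that passes a custom summary. The MediaInfo ID, caption text, and summary are made up for illustration, and a real edit would normally need an authenticated session rather than the anonymous token shown here; this is not a statement about how Commons or any existing tool actually does it.

```python
import requests

API = "https://commons.wikimedia.org/w/api.php"
session = requests.Session()
session.headers["User-Agent"] = "caption-summary-sketch/0.1 (example only)"

# Fetch a CSRF token. For an anonymous session this is the dummy token "+\";
# a real script would log in (e.g. via action=login or OAuth) first.
token = session.get(API, params={
    "action": "query", "meta": "tokens", "type": "csrf", "format": "json",
}).json()["query"]["tokens"]["csrftoken"]

# Set an English caption (a MediaInfo label) and pass a custom edit summary.
# Wikibase appends the "summary" value to its auto-generated summary.
response = session.post(API, data={
    "action": "wbsetlabel",
    "id": "M12345",                      # hypothetical MediaInfo entity ID
    "language": "en",
    "value": "A red fox in the snow",    # hypothetical caption
    "summary": "fix caption language",   # the part a user could be asked to add
    "token": token,
    "format": "json",
})
print(response.json())
```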
One week test
There is definitely no consensus to use this feature for now, but some people suggested making a test. Therefore I would propose that we make a one-week test and then evaluate the results. GPSLeo (talk) 19:37, 2 September 2024 (UTC)
- Why? How is that useful? No consensus to implement means no consensus to implement, period. I can guarantee it will not gain any more consensus with a test version. Dronebogus (talk) 21:08, 2 September 2024 (UTC)
- There were some people suggesting to make a test. There is also no consensus against some kind of measure. GPSLeo (talk) 14:11, 3 September 2024 (UTC)
- There is also no consensus for it. I feel like you’re just projecting whatever you like onto the discussion to make sure your proposal gets through somehow. It sucks when people don’t like your idea, but “seeing” consensus where none exists is not the way to fix that Dronebogus (talk) 19:54, 3 September 2024 (UTC)
- Oppose. As I noted above, this is not an appropriate use of CAPTCHAs - their purpose is to prevent automated edits by unauthorized bots, not to prevent "accidental or nonsense edits". Omphalographer (talk) 20:42, 3 September 2024 (UTC)
Simple edit confirmation
Instead of a CAPTCHA it is also possible to show a warning and require the user to confirm their edit. I would propose a one-week test where we show IPs a warning "You are publicly editing the content of the page." and they have to hit the publish button again, but with no CAPTCHA. GPSLeo (talk) 15:59, 26 September 2024 (UTC)
- Support Makes more sense. I think it's worth giving that a try, but one week is short, so somebody would need a good way of tracking relevant changes and creating some stats to see whether it's been effective. Or are there any better ideas on what to do about Unregistered or new users often moving captions to other languages? Prototyperspective (talk) 20:58, 26 September 2024 (UTC)
- Support A month is probably better though. --Adamant1 (talk) 21:05, 26 September 2024 (UTC)
- Info @Prototyperspective and Adamant1: I made a draft for the message shown by the filter: MediaWiki:Abusefilter-warning-anon-edit. GPSLeo (talk) 09:29, 13 October 2024 (UTC)
- link to Commons:Project scope directly.
- put "If you are sure that your edit is constructive to this page please confirm it again." as last paragraph.
- We also recommend that you create an account, which allows you to upload your own photos or other freely licensed content.
- RoyZuo (talk) 12:00, 18 October 2024 (UTC)
- I suggest the warning message include a link to a place where users can give feedback (complain) so that we might see how many users are affected; and the test period should be 1 month. 1 week is too short. Collect stats over the period as often as possible (daily?). RoyZuo (talk) 06:34, 18 October 2024 (UTC)
- For the monitoring I created a tool [1]. The feedback is a problem as we had to protect all regular feedback pages due to massive vandalism and I think if we create a new page for this we would have to monitor it 24/7 and massively revert vandalism. GPSLeo (talk) 08:27, 18 October 2024 (UTC)
- Interesting tool. Please correct the typo in the page title and add a link to some wikipage about it (where is the software code, where to report issues or discuss it, is it CCBY). The charts show edits that were later reverted, not edits that are reverts, by date, right? Prototyperspective (talk) 11:03, 18 October 2024 (UTC)
- I created a documentation page for the tool Commons:Revert and patrol monitoring. GPSLeo (talk) 11:01, 19 October 2024 (UTC)
- Can you keep the data from start to now, instead of only 1 month? RoyZuo (talk) 18:16, 20 October 2024 (UTC)
- The problem is that all edits become marked as patrolled after 30 days. It would be possible to also check the patrol log to get this data, but it would be a bit complicated and would require too many API requests for daily updates. For edit counts and revert counts it is not such a huge problem, but it is also a bit problematic when requesting data for a whole year every day, as edits might get reverted after a year. GPSLeo (talk) 18:40, 20 October 2024 (UTC)
- You could just keep the data as it was right before all edits become patrolled, right? RoyZuo (talk) 14:00, 23 October 2024 (UTC)
- When I first looked at the table, data started from 2024-09-17. Now you could keep, instead of erasing, data that "expires" after 1 month. RoyZuo (talk) 14:04, 23 October 2024 (UTC)
- You mean just keeping the row in the table without updating the number of reverted edits? GPSLeo (talk) 14:43, 23 October 2024 (UTC)
- From now on I will keep the data from the last day without updating it. In two months I will then need to update the design of the page; maybe I will make a subpage for the archive data. GPSLeo (talk) 16:04, 25 October 2024 (UTC)
- The feedback page is about this specific measure (double confirmation). It can be temporary, so that any existing users have a central page to complain. Imagine if you always edit without logging in, suddenly this double confirmation kicks in and you get frustrated. You want to complain, but don't know where. So if we have a link for them to write something, and if any of them bother to do so, we can see how many are affected and why, etc.
- Once the measure becomes permanent, users should just take it as it is; no point in complaining. RoyZuo (talk) 11:51, 18 October 2024 (UTC)
- We can give it a try. GPSLeo (talk) 10:30, 19 October 2024 (UTC)
- I just added the regular abuse filter error reporting link with a different text. GPSLeo (talk) 16:01, 25 October 2024 (UTC)
- If there are no concerns I would enable this for a first test. GPSLeo (talk) 06:35, 26 October 2024 (UTC)
- I just enabled this for a first test run. GPSLeo (talk) 13:19, 27 October 2024 (UTC)
- I keep hitting this filter; every time I try to edit a page I keep getting this warning, which is odd. Also, we should track anonymous edits to see if this warning actually works or not, because it might just end up annoying only the productive people and not the real, actual vandals. -- Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 16:31, 27 October 2024 (UTC)
Almost everyone hit by this filter is a registered user with a Wikimedia SUL account; I barely see any unregistered users being warned at all... --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 16:34, 27 October 2024 (UTC)
- Fixed now. I accidentally had an OR instead of an AND between the anon and mobile conditions. GPSLeo (talk) 16:57, 27 October 2024 (UTC)
- Do you think you could make it work better for "new external links" edits? Every time those happen (often when I'm NOT attempting to add an external link), I have to put in a different CAPTCHA twice for the same edit. 2603:3021:3A96:0:4E4:95C6:BC81:D2E1 21:31, 28 October 2024 (UTC)
- If you get a CAPTCHA you triggered another anti spam mechanism that has nothing to do with AbuseFilters. GPSLeo (talk) 22:08, 28 October 2024 (UTC)
- How come this Abuse Filter seems to only happen for mobile devices? 2600:1003:B4C7:465D:0:2C:1E2C:4101 22:07, 21 December 2024 (UTC)
- @GPSLeo. RoyZuo (talk) 12:36, 5 January 2025 (UTC)
- For mobile edits from anon users, 5-20% of edits are reverted. For non-mobile anon edits it is about 2-5%. GPSLeo (talk) 12:50, 5 January 2025 (UTC)
First results
After one week I looked at the share of reverted edits compared to all edits, and it shows a huge decrease in reverted edits while the number of edits only decreased slightly [2]. Looking at the filter hits also shows that many nonsense edits were never submitted while most useful edits were confirmed. GPSLeo (talk) 08:05, 3 November 2024 (UTC)
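For readers who want to reproduce this kind of figure independently of the monitoring tool, the following is a minimal sketch (not the tool's actual code) that estimates the share of reverted anonymous edits from the public MediaWiki API: it lists recent anonymous edits on Commons and counts how many carry the "mw-reverted" change tag. Note that recentchanges only goes back about 30 days, which matches the patrolling window mentioned above; the timestamps below are just an example covering the first test week.

```python
import requests

API = "https://commons.wikimedia.org/w/api.php"
HEADERS = {"User-Agent": "revert-share-sketch/0.1 (example only)"}

def reverted_share(newest: str, oldest: str) -> float:
    """Share of anonymous edits between two ISO timestamps that were later reverted."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rctype": "edit",
        "rcshow": "anon",        # only edits by unregistered (IP) users
        "rcprop": "ids|tags",
        "rcstart": newest,       # recentchanges is listed newest-first by default
        "rcend": oldest,
        "rclimit": "500",
        "format": "json",
    }
    total = reverted = 0
    while True:
        data = requests.get(API, params=params, headers=HEADERS).json()
        for change in data["query"]["recentchanges"]:
            total += 1
            if "mw-reverted" in change.get("tags", []):
                reverted += 1
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow API continuation
    return reverted / total if total else 0.0

# Example: revert share for anon edits during the first week of the test filter
print(reverted_share("2024-11-02T23:59:59Z", "2024-10-27T00:00:00Z"))
```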
- @GPSLeo from when to when was the filter live? RoyZuo (talk) 18:09, 14 November 2024 (UTC)
- The filter with current settings was activated on 16:55, 27 October 2024 and is still active. As there were no complaints I would just leave it on as it seems to have at least a small positive effect. GPSLeo (talk) 19:47, 14 November 2024 (UTC)
- The graphs dont look much different before or after 27 October. ip users just happily keep on editing by tapping twice? if true that sounds like very stubborn ip users. RoyZuo (talk) 20:49, 24 November 2024 (UTC)
- Oh, I don't know...Do any of your graphs show vandalism? If most of those edits after October 27 aren't vandalism, I wouldn't call them "very stubborn IP users". (Although I might call some of them helpful.) 2603:3021:3A96:0:2101:3992:8630:68C6 14:24, 16 December 2024 (UTC)
- It doesnt imply the users are vandalising. I meant if I were a user running into the new double confirmation for every edit I would soon give up editing. RoyZuo (talk) 12:36, 5 January 2025 (UTC)
- Maybe they're taking more care to proofread their own writing than you do? I mean, you seemingly forgot to add apostrophes in "don't" & "doesn't" and you didn't capitalize "IP". Not that I'm a Grammar Nazi (I wouldn't hold those grammatical errors against you). But, maybe the IP users are being a little more meticulous than you are in your edits and/or using something like AutoComplete to type. 2603:3021:3A96:0:407D:9112:AB5D:B4D7 16:01, 7 January 2025 (UTC)
Deletor user group proposal
[edit]The community's consensus reached to reject this request.--Kadı Message 00:07, 26 December 2024 (UTC)
- The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Hi there! I believe we need the "Deletor" user group. This hypothetical user group would only focus on immediate deletions. Not copyvios nor other speedy deletions (such as F10, G10); I mean files that are "blatant and direct violations of the Terms of Use". These kinds of images need to be deleted in the minimum possible time. After an immediate deletion, a script, the user themselves, or a bot would notify administrators, maybe on a separate dedicated page or on the deletion requests page. This is to prevent abuse of the system.
You would say "Admins already take care of these images". But my point here is to minimize the time such files stay in the backlog. We can and should lower that time.
So... what should we do? modern_primat ඞඞඞ ----TALK 18:54, 17 December 2024 (UTC)
- This is simply a nonsense request. Contact an active admin and get it done. ─ Aafī on Mobile (talk) 18:56, 17 December 2024 (UTC)
- @Modern primat Which sorts of files could the "Deletor" user group delete then? If it's solely for CSAM, then I disagree that this is needed: report to legal-reports, flag down an admin on IRC/Discord/Telegram/etc. (making sure not to do anything that could draw attention to the file), done. If not, what exactly would be allowed to be deleted by this group? —Mdaniels5757 (talk • contribs) 19:20, 17 December 2024 (UTC)
- it is for CSAM. modern_primat ඞඞඞ ----TALK 19:30, 17 December 2024 (UTC)
- Administrators are not generally the right people to deal with CSAM. Among other things, we can only do a soft deletion. Of course, if we see it, we delete it and then report it for hard-deletion through the same channels anyone else would use to report it, but this is a job for the people at legal-reports; it is certainly not a job for Commons users we would not even trust to be admins. - Jmabel ! talk 05:11, 18 December 2024 (UTC)
- This was discussed multiple times in the last years with a clear consensus against such a user group. There are technical and legal problems with such a user group and there are no people trustworthy enough for such a right but not trustworthy for admin rights. The only feature that might be supported by most would be something like a G7 deletion functionality for trustworthy users like patrollers. GPSLeo (talk) 19:25, 17 December 2024 (UTC)
- Oppose If you are trusted enough to be able to make deletions on this site, apply directly for sysop. This is just redundant. --SHB2000 (talk) 20:39, 17 December 2024 (UTC)
- This user group would be just for extraordinary situations, not regular ones, and "these kinds of files" only come around rarely. modern_primat ඞඞඞ ----TALK 20:48, 17 December 2024 (UTC)
- If we trust a user enough to decide whether it is an emergency situation or not, we can just give admin rights to this user. Deletion rights on Commons are a very dangerous tool, as bad-faith deletions can affect every other wiki. For emergency deletions there is the WMF T&S team, and stewards could also act if there is no local admin available. Additionally, having people only deleting in emergency situations would create a log highlighting the deleted files where something needed to be hidden immediately. This would give people an easy way to grab the deleted file from some caching service. We also do not have a public log of oversight actions. GPSLeo (talk) 22:08, 17 December 2024 (UTC)
- Oppose per SHB; I'm usually supportive of unbundling, but I am unconvinced that this subset of deletions is a severe backlog that the admins can't handle alone. Queen of Hearts (talk) 20:49, 17 December 2024 (UTC)
- It's not about the amount of files, it is about the amount of time. modern_primat ඞඞඞ ----TALK 20:51, 17 December 2024 (UTC)
- Then apply for sysop; if you're competent and capable of making such deletions, it will pass. If it doesn't, I don't know how I would feel about a separate user group that only handles deletions. --SHB2000 (talk) 21:37, 17 December 2024 (UTC)
- @Modern primat, I don't know how many times and in how many different ways this needs to be said: CSAM reports need to be emailed to <s>emergency@wikimedia.org</s>. They will respond within an hour. If they don't, add CA@wikimedia.org. I don't know how else to say it: what you did was in direct violation of the edit notice on AN, and it's there for a reason. All the Best -- Chuck Talk 22:45, 17 December 2024 (UTC)
- This is the correct answer. Emailing that address will get immediate attention, it is monitored at all times, and they can zap such content completely out of existence. Dealing with this kind of content is their job. Beeblebrox (talk) 23:40, 17 December 2024 (UTC)
- Chuck is correct that you need to email WMF in this situation, but I just struck through the email he gave (which is the wrong one). The correct email, as the editnotice at COM:AN says, is legal-reports@wikimedia.org. (Pinging to make sure you see this: @Alachuckthebuck and Beeblebrox: ) —Mdaniels5757 (talk • contribs) 21:15, 20 December 2024 (UTC)
- I'm not at all sure I agree with what it says in that edit notice. The emergency address is actively monitored at all times. I don't think legal is and we want that sort of content shot down the memory hole as quickly as humanly possible. Beeblebrox (talk) 03:04, 21 December 2024 (UTC)
- Oppose There is no one I would trust with this right that I wouldn't trust with full sysop. The Squirrel Conspiracy (talk) 02:43, 18 December 2024 (UTC)
- @The Squirrel Conspiracy: You may find such users at Commons:Administrators/Nominations. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 13:19, 25 December 2024 (UTC)
- I don't think you understand my position. There will never be an editor that I would trust with this right that I wouldn't trust with full sysop. The Squirrel Conspiracy (talk) 18:41, 25 December 2024 (UTC)
- Oppose Squirrel said my exact thoughts. Bastique ☎ let's talk! 03:39, 18 December 2024 (UTC)
- Oppose feels like a way to get around Commons:Administrators/Requests/Modern primat. Multichill (talk) 17:13, 24 December 2024 (UTC)
- ok... 😐 modern_primat ඞඞඞ ----TALK 17:22, 24 December 2024 (UTC)
- I don't care about adminship, my brother. I just care about our community and our work. modern_primat ඞඞඞ ----TALK 17:23, 24 December 2024 (UTC)
- Oppose. I agree with others above. Anyone I would trust with the delete right is someone I would also trust with sysop rights. --Ratekreel (talk) 13:39, 25 December 2024 (UTC)
- The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
RfC: Changes to the public domain license options in the Upload Wizard menu
[edit]An editor has requested comment from other editors for this discussion. If you have an opinion regarding this issue, feel free to comment below. |
Should any default options be added or removed from the menu in the Upload Wizard's step in which a user is asked to choose which license option applies to a work not under copyright? Sdkb talk 20:19, 19 December 2024 (UTC)
Background
The WMF has been (at least ostensibly) collaborating with us during its Upload Wizard improvements project. As part of this work, we have the opportunity to reexamine the step that occurs after a user uploads a work that they declare is someone else's work but not protected by copyright law. They are then presented with several default options corresponding to public domain license tags, or a field to write in a custom tag.
It is unclear why these are the specific options presented; I do not know of the original discussion in which they were chosen. This RfC seeks to determine whether we should add or remove any of these options. I have added one proposal, but feel free to create subsections for others (using the format "Add license name" or "Remove license name" and specifying the proposed menu text). Sdkb talk 20:19, 19 December 2024 (UTC)
Add PD-textlogo
Should {{PD-textlogo}} be added, using the menu text "Logo image consisting only of simple geometric shapes or text"? Sdkb talk 20:19, 19 December 2024 (UTC)
- Support. Many organizations on Wikipedia that have simple logos do not have them uploaded to Commons and used in the article. Currently, the only way to upload such images is to choose the "enter a different license in wikitext format" option and enter "{{PD-textlogo}}" manually. Very few beginner (or even intermediate) editors will be able to navigate this process successfully, and even for experienced editors it is cumbersome. PD-textlogo is one of the most common license tags used on Commons uploads — there are more than 200,000 files that use it. As such, it ought to appear in the list. This would make it easier to upload simple logo images, benefiting Commons and the projects that use it. Sdkb talk 20:19, 19 December 2024 (UTC)
- Addressing two potential concerns. First, Sannita wrote, "the team is worried about making available too many options and confusing uploaders". I agree with the overall principle that we should not add so many options that users are overwhelmed, but I don't think we're at that point yet. Also, if we're concerned about only presenting the minimum number of relevant options, we could use metadata to help customize which ones are presented to a user for a given file (e.g. a .svg file is much more likely to be a logo than a .jpg file with metadata indicating it is a recently taken photograph).
- Second, there is always the risk that users upload more complex logos above the TOO. We can link to commons:TOO to provide help/explanation, and if we find that too many users are doing this for moderators to handle, we could introduce a confirmation dialogue or other further safeguards. But we should not use the difficulty of the process to try to curb undesirable uploads any more than we should block newcomers from editing just because of the risk they'll vandalize — our filters need to be targeted enough that they don't block legitimate uploads just as much as bad ones. Sdkb talk 20:19, 19 December 2024 (UTC)
- "we could use metadata" I'd be very careful with that. The way people use media changes all the time, so making decisions about how the software behaves on something like that, I don't know... Like, if it is extracting metadata, or check on is this audio, video, or image, that's one thing, but to say 'jpg is likely not a logo and svg and png might be logos' and then steer the user into a direction based on something so likely to not be true. —TheDJ (talk • contribs) 10:52, 6 January 2025 (UTC)
- Oppose. Determining whether a logo is sufficiently simple for PD-textlogo is nontrivial, and the license is already frequently misapplied. Making it available as a first-class option would likely make that much worse. Omphalographer (talk) 02:57, 20 December 2024 (UTC)
- Comment only if this will result in it being uploaded but tagged for review. - Jmabel ! talk 07:14, 20 December 2024 (UTC)
- That should definitely be possible to implement. Sdkb talk 15:13, 20 December 2024 (UTC)
- Support Assuming there's some kind of review involved. Otherwise Oppose, but I don't see why it wouldn't be possible to implement a review tag or something. --Adamant1 (talk) 19:10, 20 December 2024 (UTC)
- Support for experienced users only. Sjoerd de Bruin (talk) 20:20, 22 December 2024 (UTC)
- Oppose per Omphalographer. {{PD-textlogo}} can be used only with a logo that is sufficiently simple in the majority of countries per COM:Copyright rules (first sentence in the USA and in both countries per COM:TOO), in my opinion (Google Translate). AbchyZa22 (talk) 11:02, 25 December 2024 (UTC)
- Oppose in any case. We have enough backlogs and don't need another thing to review. --Krd 09:57, 3 January 2025 (UTC)
- How about we just disable uploads entirely to eliminate the backlogs once and for all?[Sarcasm] The entire point of Commons is to create a repository of media, and that project necessarily will entail some level of work. Reflexively opposing due to that work without even attempting (at least in your posted rationale) to weigh that cost against the added value of the potential contributions is about as stark an illustration of the anti-newcomer bias at Commons as I can conceive. Sdkb talk 21:36, 3 January 2025 (UTC)
- Oppose. I think the template is often misapplied, so I do not want to encourage its use. There are many odd cases. Paper textures do not matter. Shading does not matter. An image with just a few polygons can be copyrighted. Glrx (talk) 19:47, 6 January 2025 (UTC)
- Support adding this to the upload wizard, basically per Sdkb (including the first two sentences of their response to Krd). Indifferent to whether there should be a review process: on one hand, it'd be another backlog that will basically grow without bound; on the other, it could be nice for the reviewed ones. —Mdaniels5757 (talk • contribs) 23:57, 6 January 2025 (UTC)
General discussion
Courtesy pinging @Sannita (WMF), the WMF community liaison for the Upload Wizard improvements project. Sdkb talk 20:19, 19 December 2024 (UTC)
- Thanks for the ping. Quick note: I will be on vacation starting tomorrow until January 1, therefore I will probably not be able to answer until 2025 starts, if needed. I'll catch up when I have a working connection again, but be also aware that new changes to code will need to wait until at least mid-January. Sannita (WMF) (talk) 22:02, 19 December 2024 (UTC)
- Can we please add a warning message for PDF uploads in general? This is currently enforced by an abuse filter, and it is the second most common report at Commons talk:Abuse filter. And if they use PD-textlogo or PD-simple (or any AI tag) it should add a tracking category that is searched by User:GogologoBot. All the Best -- Chuck Talk 23:21, 19 December 2024 (UTC)
- Yes, please. Even with the abuse filter in place, the vast majority of PDF uploads by new users are accidental, copyright violations, and/or out of scope. There are only a few appropriate use cases for the format, and they tend to be uploaded by a very small number of experienced users. Omphalographer (talk) 03:11, 20 December 2024 (UTC)
- Comment, the current version of the MediaWiki Upload Wizard contains the words "To ensure the works you upload are copyright-free, please provide the following information.", but Creative Commons (CC) isn't "copyright-free", it is a free copyright ©️ license, not a copyright-free license. I'm sure that Sannita is keeping an eye on this, so I didn't ping <s>her</s> him. It should read along the lines of "To ensure the works you upload are free to use and share, please provide the following information.". --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 12:19, 24 December 2024 (UTC)
- @Donald Trung: Sannita (WMF) presents as male, and uses pronouns he/him/his. Please don't make such assumptions about pronouns. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:02, 24 December 2024 (UTC)
- My bad, I've corrected it above. For whatever reason I thought that he was a German woman because I remember seeing the profile of someone on that team and I probably confused them in my head, I just clicked on their user page and saw that it's an Italian man. Hopefully he won't feel offended by this mistake. Just saw that he's a fellow Whovian, but the rest of the comment remains unaltered as I think that the wording misrepresents "free" as "copyright-free", which are separate concepts. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 14:09, 24 December 2024 (UTC)
- (Hello, I'm back in office) Not offended at all, it happens sometimes on Italian Wikipedia too. Words and names ending in -a are usually feminine in Italian, with some exceptions like my name and my nickname that both end in -a, but are masculine. :) Sannita (WMF) (talk) 13:15, 2 January 2025 (UTC)
- Wiki markup: {{gender:Sannita (WMF)|male|female|unknown}} → male. Glrx (talk) 03:07, 3 January 2025 (UTC)
Media Blurring And Censoring
On Wikimedia there has been some inappropriate content and NSFW content and we should censor it by using the MediaSpoiler extension, to protect everyone. Do you agree? Wikan Boy 123 (talk) 06:36, 21 December 2024 (UTC)
- A simple word shall suffice as answer: No. I do not agree. Regards, Grand-Duc (talk) 07:57, 21 December 2024 (UTC)
- @Grand-Duc: Thanks. The OP has been blocked and their unblock request denied. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:31, 21 December 2024 (UTC)
- FYI the MediaSpoiler extension is not about censoring, it gives users an option to show/hide all media including NSFW ones. Prototyperspective (talk) 17:54, 27 December 2024 (UTC)
New template PD-textflag
Hello admins, a proposal: is it possible to create a new template for simple flags like this one (File:Bandera de Colina (Falcón).svg)? That flag contains text (below the TOO as well). Do you agree with creating a new template "PD-textflag"? AbchyZa22 (talk) 17:22, 25 December 2024 (UTC)
- Comment @Glrx: any opinion? (Google Translate). AbchyZa22 (talk) 12:26, 6 January 2025 (UTC)
- I don't see the need. For flags, it's usually the individual drawings which have copyright, not the design. The particular vector instructions in an SVG might have a copyright, even if the visual result does not, for example. So we should keep that licensing statement on that SVG. {{PD-text}} is fine if there is a very particular circumstance where it makes sense (it's really just another name for PD-ineligible anyways). Carl Lindberg (talk) 12:51, 6 January 2025 (UTC)
- Oppose per Clindberg. Also, the affected illustrations would be few. Glrx (talk) 19:33, 6 January 2025 (UTC)
I think we should expand Commons:Derivative works#But how can we illustrate topics like Star Wars or Pokémon without pictures? into its own essay page (much like Commons:But it's my own work!). This essay would discuss acceptable fan art, costumes and cosplay, architecture/sculpture deriving from non-free media in FoP countries, and blacking out non-free architecture/sculpture in countries without sufficient FoP provisions, among others. JohnCWiesenthal (talk) 21:58, 26 December 2024 (UTC)
- Consider this a more general question, but isn't the answer there in most (if not all instances) to upload a low quality image of the character to Wikipedia as fair use? --Adamant1 (talk) 23:44, 26 December 2024 (UTC)
- That's one part of it; the other (which is already touched upon in COM:DW) is that pictures are not mandatory. Sometimes there's simply no way to usefully depict a topic using freely licensed imagery, and that's okay. An article can still use words to describe its topic without showing a picture of it. Omphalographer (talk) 03:26, 27 December 2024 (UTC)
Sometimes there's simply no way to usefully depict a topic using freely licensed imagery, and that's okay.
What about File:P Harry Potter-icon.svg, which has been considered an acceptable depiction of Harry Potter? That image is used as an icon on multiple wikis. JohnCWiesenthal (talk) 06:04, 27 December 2024 (UTC)
- Harry Potter characters can be drawn and released under CCBY based on the descriptions in the books if not based on or looking very similar to the copyrighted book cover or films. As for Pokemon, one can draw a fictional pokemon to illustrate how these look without violating copyright. Prototyperspective (talk) 17:56, 27 December 2024 (UTC)
- This is exactly what I am discussing. I am referring to Commons-acceptable depictions or allusions to non-free works. I would list as examples acceptable fan art (those based on an abstract depiction of the work, those illustrating a general idea rather than a specific depiction of the idea, those alluding to a work with separately uncopyrightable elements and those falling below TOO), derivative sculptures in FOP countries, and blacked-out photographs implying the existence of non-free, publicly-displayed architecture/sculpture. (Examples of these would include File:P Harry Potter-icon.svg, File:Skibidi Toilet.svg, File:Bullwhip and IJ hat.jpg, File:Captain America Shield.svg, File:Star Wars characters at Madame Tussaud.jpg and File:Louvre pyramid - blackout.jpg, respectively.) JohnCWiesenthal (talk) 20:09, 27 December 2024 (UTC)
- I think it should be discussed here (maybe also here). Prototyperspective (talk) 21:35, 27 December 2024 (UTC)
- It doesn't really seem like it's clear what kind of fan art is or isn't acceptable on here, outside us generally keeping images that are in use on other projects. But if that's the direction you're suggesting things go in, then there should be a better standard, and one that doesn't just encourage adding a bunch of fan art to Wikipedia en masse and/or in instances where it's not appropriate. For instance, English Wikipedia generally doesn't want AI-generated images to be used in articles that don't specifically relate to AI. It's kind of going against that to dictate on our end what kind of content is acceptable on their project by saying fan art is in scope. Really, this whole thing is framed backwards. We don't determine what can or can't be used to "illustrate topics" on Wikipedia to begin with. --Adamant1 (talk) 21:40, 27 December 2024 (UTC)
- @Adamant1: Not all language Wikipedias accept fair use; the Spanish one certainly doesn't. See the list at m:nfc. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 04:18, 27 December 2024 (UTC)
- @Jeff G.: Fair enough. I was mainly thinking about English Wikipedia, but you make a valid point. --Adamant1 (talk) 21:40, 27 December 2024 (UTC)
- That's one part of it; the other (which is already touched upon in COM:DW) is that pictures are not mandatory. Sometimes there's simply no way to usefully depict a topic using freely licensed imagery, and that's okay. An article can still use words to describe its topic without showing a picture of it. Omphalographer (talk) 03:26, 27 December 2024 (UTC)
Category naming for proper names
There are currently multiple CfD disputes on the naming of categories for proper names (Commons:Categories for discussion/2024/12/Category:FC Bayern Munich and Commons:Categories for discussion/2024/12/Category:Polonia Warszawa). The problem is caused by an unclear guideline. At COM:CAT the guideline says: "Category names should generally be in English. However, there are exceptions such as some proper names, biological taxa and names for which the non-English name is most commonly used in the English language". The first problem is that sometimes people do not notice that there is no comma before the "for" and think that the condition applies in all cases. This might also be caused by some wrong translations. The other problem is the "some", as there are no conditions defined for when this applies and when it does not. I think we have four options:
- Translate all proper names
- Translate proper names when English version is commonly used (enwiki uses a translated name)
- Do not translate proper names but transcribe non-Latin alphabets
- Always use the original proper name
Redirects can exist anyway. The question of what to do with locations that have multiple official local names in multilingual regions is a different topic, to be discussed after there is a decision on the main question. GPSLeo (talk) 11:40, 28 December 2024 (UTC)
- I don't think it's a bad thing that the rule gives room for case-by-case decisions. The discussions about this are very long, but it's rarely about a real problem with finding or organising content. So my personal rule would be something like ‘If it's understandable to an English speaker, is part of a subtree curated by other users on an ongoing basis, and you otherwise have no engagement in that subtree, don't suggest a move just because of a principle that makes no difference here.’ Rudolph Buch (talk) 14:37, 28 December 2024 (UTC)
- 100% That should be the standard. People are too limp-wristed when it comes to dealing with obviously disingenuous behavior or enforcing any kind of standards on here though. But 99% of the time this is only a problem because someone wants to use category names as their personal nationalist project. It's just that no one is willing to put their foot down by telling the person that's not what categories are for. Otherwise this would be a nonissue. But the guideline should be clear that category names shouldn't be in the "native language" if that doesn't follow the category tree and/or is only being done for personal, nationalistic reasons. --Adamant1 (talk) 18:40, 28 December 2024 (UTC)
- I think that at least in most cases the right answer is something like #2 except:
- I wouldn't always trust en-wiki to get it right, especially on topics where only one or two editors have ever been involved there, and we might well have broader and more knowledgeable involvement here.
- Non-Latin alphabets should be transliterated.
- The thing is, of course, that is exactly the one that frequently requires judgement calls, so we are back where we started.
- Aside: in my experience, some nationalities (e.g. German) have a fair number of people who will resist the use of an English translation no matter how common, while others (e.g. Romanian) will "overtranslate". On the latter, as an American who has spent some time in Romania, I'm always amazed when I see Romanians opt for English translations for things where I've always heard English-speakers use the Romanian (e.g. "Roman Square" for "Piața Romana"; to my ear, it is like calling the composer Giuseppe Verdi "Joseph Green"). - Jmabel ! talk 18:59, 28 December 2024 (UTC)
- I've made the sentence in COM:CAT, quoted by OP, into a list, to remove ambiguity. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:56, 12 January 2025 (UTC)
RfC: Should Commons ban AI-generated images?
[edit]An editor has requested comment from other editors for this discussion. If you have an opinion regarding this issue, feel free to comment below. |
Should Commons policy change to disallow the uploading of AI-generated images from programs such as DALLE, Midjourney, Grok, etc per Commons:Fair use?
Background
AI-generated images are a big thing lately and I think we need to address the elephant in the room: they have unclear copyright implications. We do know that in the US, AI-generated images are not copyrighted because they have no human author, but they are still very likely considered derivative works of existing works.
AI generators use existing images and texts in their datasets and draw from those works to generate derivatives. There is no debate about that, that is how they work. There are multiple ongoing lawsuits against AI generator companies for copyright violation. According to this Washington Post article, the main defense of AI generation rests on the question of if these derivative works qualify as fair use. If they are fair use, they may be legal. If they are not fair use, they may be illegal copyright violations.
However, as far as Commons is concerned, either ruling would make AI images go against Commons policy. Per Commons:Fair use, fair use media files are not allowed on Commons. Obviously, copyright violations are not allowed either. This means that neither of the two possible legal outcomes for AI images would allow them to be used on Commons. There is no possible scenario where AI-generated images are not considered derivative in some way of copyrighted works; it's just a matter of whether it's fair use or not. As such, I think that AI-generated images should be explicitly disallowed in Commons policy.
Discussion
Should Commons explicitly disallow the uploading of AI-generated images (and by proxy, should all existing files be deleted)? Please discuss below. Di (they-them) (talk) 05:00, 3 January 2025 (UTC)
- Enough. It is a great waste of time to have the same discussion over and over and over. I find it absurd to think that most AI creations are going to be considered derivative works. The AI programs may fail that test, but what they produce clearly isn't. Why don't we wait until something new has happened in the legal sphere before we start this discussion all over?--Prosfilaes (talk) 06:21, 3 January 2025 (UTC)
- OpposeNo, it shouldn't and they are not derivative works and if they are uploaded by the person who prompted them they also are not fair use but PD (or maybe CCBY). They are not derived from millions of images, like images you draw are not "derived" from public works you previously saw (like movies, public exhibitions, and online art) that inspired or at least influenced you.
"There is no debate about that, that is how they work."
False.
"the main defense of AI generation rests on the question of if these derivative works qualify as fair use."
Also false. Prototyperspective (talk) 09:52, 3 January 2025 (UTC)
- Most AI-generated images, unless the AI is explicitly told to imitate a certain work, are not "derivative works" in the sense of copyright, because the AI does a thing similar to humans when they create new works: Humans have knowledge of a lot of pre-existing works and create new works that are inspired by them. AI, too, "learns" for example what the characteristics of Impressionist art are through the input of a lot of Impressionist paintings, and is then able to create a new image in Impressionist style, without that image being a derivative work of any specific work where copyright regulations would apply - apart from the fact, of course, that in this specific example, most of the original works from the Impressionist period are public domain by now anyway. The latter would also be an argument against the proposal: Even if it were the case that AI creates nothing but "derivative works" in the sense of copyright, derivative works of public domain original art would still be absolutely fine, so this would be no argument for completely banning AI images. Having said all that, I think that we should handle the upload of AI images restrictively, allow them only selectively, and Commons:AI-generated media could be a bit stricter. But a blanket ban wouldn't be a reasonable approach, I think. Gestumblindi (talk) 11:12, 3 January 2025 (UTC)
- We want images for a given purpose. It's a user who uploads such an image. He is responsible for his work. We shouldn't care how much assistance he had in the creation process. But I'd appreciate an agreement on banning photorealistic images designed to deceive the viewer. AI empowers users to create images of public (prominent) people and have these people appear more heroic, evil, clean, dirty, important or whatever than they are. But we have this problem with Photoshop already. I don't want such images in Wikimedia even if most people know a given image to be a hoax (such as those of Evil Bert from Sesame Street). Vollbracht (talk) 01:42, 4 January 2025 (UTC)
- This discussion isn't about deception or usefulness of the images, it's about them being derivative works. Di (they-them) (talk) 02:12, 4 January 2025 (UTC)
- You got the answer on "derivative works" already. I can't see a legal difference between a photoshopped image and an image altered by AI or a legal difference between a paintbrush artwork and an AI generated "artwork". Still as Germans say: "Kunst kommt von können." (Art comes from artistic abilities.) It's not worth more than the work that went into it. If you spend no more than 5 min. "manpower" in defining what the AI shall generate, you shouldn't expect to have created something worthy of any copyright protection or anything new in comparison to an underlying work of art. We don't need more rules on this. When deriving something keep the copyright in mind - no matter what tool you use. Vollbracht (talk) 03:34, 4 January 2025 (UTC)
- Look at other free-upload platforms and you get to the inevitable conclusion that AI uploads will ultimately overwhelm Commons by legal issues or sheer volume. Because people. But with no new legal impulses and no cry for action from tech Commons, I see no need for a new discussion at this point. Alexpl (talk) 05:59, 4 January 2025 (UTC)
- As I understand it, there are three aspects of an AI image:
- The creations caused by the computer algorithm. Probably not copyrighted anywhere because an algorithm is not an animal.
- An AI prompt, entered by a human. This potentially exceeds the threshold of originality, in which case the AI output probably is a derivative work of the prompt. Maybe we need a licence of the AI prompt from the person who wrote it, unless the prompt itself is provided and determined to be below the threshold of originality.
- Sometimes an AI image or text is a derivative work of an unknown work which the AI software found somewhere on the Internet. Here it might be better to assume good faith and only delete if an underlying work is found. --Stefan2 (talk) 11:54, 7 January 2025 (UTC)
- Re 2: note that short quotes can also be put onto Wikipedia, which is CC BY-SA, and onto Wikiquote. Moreover, that applies to the prompt, but media files can also be uploaded without the input prompt attached. In any case, if the prompt engineer licenses the image under CC BY or PD then it can be uploaded, and I only upload those kinds of AI images even if others may also be PD. Re 3: that depends on the prompt; if you're tailoring the prompt in some specific way so it produces an image like that, then it may create an image looking very similar... e.g. if you prompt "La Vie, 1903 painting by Pablo Picasso, in the style of Pablo Picasso, the life", it's likely to produce an image looking like the original. I also don't think we should assume that active contributors would do so without disclosing it. Prototyperspective (talk) 12:16, 7 January 2025 (UTC)
- If you ask for "La Vie, 1903 painting by Pablo Picasso, in the style of Pablo Picasso, the life", then you are very likely to get a derivative work.
- If you ask for "a picture of a cat", then there is no problem with #2, but you have no way of knowing how the AI tool produced the picture, so you are maybe in violation of #3 (you'll find out if the copyright holder sues you). --Stefan2 (talk) 12:53, 7 January 2025 (UTC)
- Oppose Whatever the details of AI artwork and derivatives are, there's a serious lack of people checking for copyright violations to begin with, and anyone who tries to follow any kind of standards when it comes to AI artwork just gets cry-bullied, threatened, and/or sanctioned for supposedly causing drama. So there's really no point in banning it or even moderating it in any way whatsoever to begin with. The more important thing is properly labeling it as such and not letting people pass AI artwork off on here as legitimate, historically accurate images. The only other alternative would be for the WMF to take a stance on it one way or another, but I don't really see that happening. There's nothing that can or will be done about all the AI slop on here until then, though. --Adamant1 (talk) 06:51, 9 January 2025 (UTC)
- Conditional Support. I do not support an outright and total ban of any and all AI-generated imagery (in short: AI file) on Commons, that's going too far. But I would support a strict enforcement and a strict interpretation of our scope policy in regards to such imagery. By that, I mean the following.
- I support the concept that any upload of AI-generated imagery has to satisfy the existence and demonstration of a concise and legitimate use case on Wikimedia projects before uploading the data on Commons. If any AI file is not used, then it's blanketly out of scope. Reasoning: Most Wikimedia projects have a rule of only holding verifiable information. AI files have a fundamental issue with this requirement of verifiability, as the LLMs (Large Language Models) used do not allow for a correlation between input and output. This is exemplified by the inability of the LLM creators to remove the results of rights-infringing training data from the processing algorithms; they can only tweak the output to forbid the LLM from outputting infringing material like song lyrics or journalistic texts.
- I support a complete ban of AI-generated imagery that depicts real-life celebrities or historical personages. For celebrities, the training data is most likely made of copyrighted imagery, at least partly. For historical personages, AI files will likely deceive a viewer or reader into thinking that the AI file is historically accurate. Such a deceptive result is against our project scope, see COM:EDUSE.
- I support the notion of using AI files to illustrate concepts that fall within the purview of e.g. social sciences. I could very well see a good use case to illustrate e.g. poverty, homelessness, sexuality topics and other potentially contentious themes at the discretion of the writing Wikipedian. AI files may offer the advantage in that most likely no personality rights will get touched by the depiction. For this use case, AI files would have to strictly satisfy our COM:Redundant policy: as soon as there is an actual human made media file, a photograph, movie or sound recording that actually fulfils the same purpose as the AI file, then the AI file gets blanketly out of scope.
- I am aware that these opinions are quite strict and more against AI generated imagery. That's due to my background thoughts about the known limitations of generative software and a currently unclear IP right situation about the training data and the output of these LLM. I lack the imagination on how AI files could currently serve to improve the mission of disseminating knowledge, save for some limited use cases. Regards, Grand-Duc (talk) 19:07, 9 January 2025 (UTC) PS. For further reference: Commons:Deletion requests/Files in Category:AI-generated portraits.
- Re 1.: some people complain that people upload images without use case, other people complain when people add useful media they created themselves to articles – it's impossible to make it right. Moreover, Commons isn't just there as a hosting site for Wikipedia & Co but also a standalone site. Your point about LLM is good and I agree but this discussion is not about LLMs but AI media creation tools.
- Re 2.: paintings are also inaccurate. Images made or modified with AI (or made with AI and then edited with eg Photoshop) are not necessarily inaccurate. I'm also very concerned about the known limitations of generative software but that doesn't really support your points and doesn't support that Commons should censor images produced with a novel toolset. Prototyperspective (talk) 19:34, 9 January 2025 (UTC)
- All the AI media creation tools, be they Midjourney, Grok, Dall-E or the plethora of other offerings, are based upon LLMs. So, any discussion about current "AI media creation tools" is the same as discussing the implications of LLMs in practice, IP law and society. And yes, Commons wants to also serve other sites and usages (like school homework for my son, did so in the past and will do in the future). But as anybody may employ generative AI, there is no need to use Commons to endorse any and all potential use - as I tried to demonstrate, AI files are only seldom useful to disseminate knowledge, see Commons:Project scope.
- Paintings are often idealized, yes, introducing inaccuracies. But in that case, the work is vouched for by a human artist, who employed his creativity and his knowledge based upon the learnings of his life to produce a given result. These actions cannot be duplicated at the moment by generative AI, only imitated. And while mostly educated humans will recognize a painting as a creation of a fellow human that will certainly contain inaccuracies, the stories about "alternative facts", news bubbles, deepfakes etc. show that generative AI products are often not recognized as such and are taken at face value. Regards, Grand-Duc (talk) 19:56, 9 January 2025 (UTC)
- No, those are not the same implication. You however got closer to understanding the concept and basics of prompt engineering which is about getting the result you intend or imagined despite all the flaws LLMs have.
People have developed all sorts of techniques and tricks to make these tools produce the images they have in mind at a quality they'd like them to have. If you think people ask AI generator tools to illustrate a subject by just providing the concept's name, like "Cosmic distance ladder", and then assume it produces an accurate, good image showing that, you'd be wrong. Moreover, most AI images do look like digital art rather than photos and are generally labelled as such. Prototyperspective (talk) 22:05, 9 January 2025 (UTC)
- Oppose per Prosfilaes it is not at all guaranteed that we're in a Hobson's choice here. Some AI images may well be bad, but banning them all just in case is ridiculous. --GRuban (talk) 21:53, 9 January 2025 (UTC)
- Support with the possible exception of images that are themselves notable. Blythwood (talk) 20:22, 11 January 2025 (UTC)
- Mostly support, but not for the reasons proposed. While I don't disagree with the argument that AI-generated content could potentially be considered a derivative work, this argument isn't currently supported by law, and I don't think that's likely to change in the near future. However, very few AI-generated images have realistic educational use. Generated images of real people, places, or objects are inherently inferior to actual photos of those subjects, and often contain misleading inaccuracies; images of speculative topics tend towards clichés (like holograms and blue glowing robots for predictions of the future, or ethnic stereotypes for historical subjects); and AI-generated diagrams and abstract illustrations are inevitably meaningless gibberish. The vast majority of AI-generated images inserted into Wikimedia projects are rejected by editors on those projects and subsequently deleted from Commons; those that aren't removed tend to be more the result of indifference (especially on low activity projects) than because they actually provide substantial educational value. Omphalographer (talk) 20:15, 20 January 2025 (UTC)
Upload of preview images for existing svg files
If we'd allow the original uploader of an SVG file to provide manually generated reference preview PNG files, we'd have a number of advantages:
- The uploader could provide resolutions optimized for the purpose the SVG file was designed for.
- The reference preview could show how rsvg-convert should have rendered the SVG file in case of unexpected problems. If it's the uploader's fault, we, the community, could give helpful hints. Otherwise we could suggest workarounds or find an admin who might solve the problem.
- The reference preview could reveal how the user agent (Firefox, e.g.) should render the SVG file. In case of differences the user might recognize the necessity to install a given font (listed in meta:SVG fonts) to have his user agent render the file the intended way.
- The file could then probably still be used for its purpose, e.g. in WP, in spite of rendering problems with rsvg-convert.
Current example: I just uploaded file:Arab Wikimedia SVG fonts.svg. It started with <svg version="1.1" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" width="210mm" height="594mm" viewBox="0 0 3535 9999">. Client-side rendering allows printing this file on two A4 pages with perfectly rendered characters. It was impossible, though, to get an automatically generated preview file with even a single character being readable. I had to change the attributes to <svg ... width="3535" height="9999" viewBox="0 0 3535 9999">. But now the information on the intended image and font sizes is gone. Vollbracht (talk) 00:58, 4 January 2025 (UTC)
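For reference, a minimal sketch of the two root elements described above. Only the size-related attributes differ; the drawing content is omitted and the comments are illustrative, not part of the uploaded file:

<!-- As uploaded first: physical size in mm. User agents print it at true size (two A4 pages),
     but rsvg-convert reportedly interpreted the mm values against a 96 dpi assumption
     (as described below), so the generated previews were unreadable. -->
<svg version="1.1" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
     width="210mm" height="594mm" viewBox="0 0 3535 9999">
  <!-- ... drawing content ... -->
</svg>

<!-- Workaround: unitless width/height matching the viewBox. The generated previews become
     readable, but the intended physical print size is no longer encoded in the file. -->
<svg version="1.1" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
     width="3535" height="9999" viewBox="0 0 3535 9999">
  <!-- ... drawing content ... -->
</svg>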
- Oppose. SVG intends to be scalable (i.e., no fixed size), so the notion of fixed or minimum sizes is counter to the intent. In general, SVG does not scale fonts linearly. Text that is a few pixels high will not be readable. Furthermore, font specifications and substitutions are problematic. SVG files that use text elements should expect font substitutions rather than exact rendering. Setting width and height is also problematic: do you want to specify a fixed size, do you want the image to fill its designated container, or do you want to be able to pan/zoom in that container? SVG also does not have the notion of a "page". Glrx (talk) 01:31, 4 January 2025 (UTC)
- SVG intends to stay in high quality when scaling. It's not limited to presentations that are size independent or tied to specific output media. SVG allows drawing a ruler that in original size has correct dimensions when printed or shown on a correctly installed monitor. Sometimes I want to pan/zoom my container. Sometimes I don't. SVG allows both (even scaling one axis only).
- And how does SVG scale fonts if not linearly? What are pixels but a unit of measurement based on 96 dpi monitors? And, yes, SVG files that use text elements should expect font substitutions, but within limits.
- My problem in the current example was that rsvg-convert took my mm information as based on 96 dpi monitors as well. But by definition they are to be applied to different output media. So ideally, preview images should have been generated at 220 horizontal dots in total (for the WP thumb), at 96 dpi (for classic low-resolution monitors), and at least at 300 dpi (for low-resolution laser printers). Vollbracht (talk) 02:42, 4 January 2025 (UTC)
- Oppose, mostly. Providing a preview at upload time of how Wikimedia's servers will render an SVG (and a warning if it fails to render) is a good idea, and one which I think should be followed up on. But allowing uploaders to override that preview with a custom image is not viable - it'd inevitably lead to situations where there are mismatches between SVG content and its previews, especially as files are updated. If you're unhappy with how an SVG is rendered, change the SVG to render properly, or file a bug if something is unambiguously wrong. Omphalographer (talk) 02:36, 4 January 2025 (UTC)
- Yes! Mismatches between a custom preview and updated SVG files are a problem. So in most cases we would avoid that rather than accept such problems. But at least in some cases a solution could be defining a custom set of preview resolution definitions. What are the chances of getting such a possibility sometime in the future? Vollbracht (talk) 02:53, 4 January 2025 (UTC)
- What are you trying to accomplish, precisely? MediaWiki generates image thumbnails on demand - the set of resolutions listed on the file page is just a couple of guesses at sizes that users might want to look at, not the sum total of all sizes that can be generated. Omphalographer (talk) 03:30, 4 January 2025 (UTC)
- The user provided an example of the problem and proposed a solution. It seems to me that they were perfectly clear about what they are "trying to accomplish". The MediaWiki software is faulty WRT SVG, and they propose a fine workaround that can be adopted immediately, while the SVG problem has been there for many years and is probably going to stay for many more years. C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 10:00, 4 January 2025 (UTC)
Bø
Happy new year, folks! "Category:Bø, Midt-Telemark" should be merged with "Category:Bø i Telemark", because it is the same city. Tollef Salemann (talk) 23:12, 4 January 2025 (UTC)
- Some of the pictures in "Bø, Telemark" refer to the former municipality, but the rest are just the city. Not sure what to do with some of it and what is the easiest way to solve it. Maps are of the municipality, but most of the stuff is the city and people from the city. Tollef Salemann (talk) 23:15, 4 January 2025 (UTC)
Restrict administrators from blocking or sanctioning users in certain instances
I'm not going to point fingers, but there have been multiple instances over the last couple of years where I've seen administrators block people in cases where they were clearly involved in a dispute with the user at the time and/or had very little participation in the project to begin with. In the second instance it was probably because they were canvassed off-site, which should never be acceptable. So I'm proposing two things here.
1. An administrator should not be able to block or sanction a user that they are clearly involved in a personal dispute with at the time.
2. Administrators who have little or no participation in the project should not be doing "drive-by" blocks or sanctions, period.
Nor should an administrator who meets either criteria be able to deny an unblock request.
In both cases the block, sanction, or denial of an unblock request should be reversed as invalid. There's absolutely no instance where an administrator should be able to block someone to win an editing dispute or do so as a way to prove a point because they don't like the user or how they communicate. Still less should an administrator who only superficially participates in the project be able to block or sanction people. There are enough well-established, uninvolved administrators to block or sanction a user if their behavior is actually that much of an issue. Adamant1 (talk) 08:30, 9 January 2025 (UTC)
- Oppose There's clearly a specific incident that you see as a problem. If that's the case, you should go to Commons:Administrators' noticeboard and ask for a review of that specific administrator's actions. This proposal, as written, is a) too vague to be enforceable (what constitutes "involved" and "little participation"?), and b) already reflects community norms (if an admin is blocking someone to "win" a personal dispute, that's already a problem, hence me suggesting you go to COM:AN). The Squirrel Conspiracy (talk) 08:46, 9 January 2025 (UTC)
- @The Squirrel Conspiracy: It shouldn't be that hard to figure out when an administrator blocked a user to get their way in a personal dispute. It's not that vague of a word. Also, you can say it's already a problem, but it happens pretty frequently on here and it's never reverted because people play defense for the admin or act like the user is making excuses for their behavior. There's no reason the block would be reverted if there's no guideline saying it's not acceptable anyway. By "little participation" I mean an administrator who has only made a few edits in the last year and/or has very little experience with the project outside of that issue. Again, it shouldn't be that hard to determine if an administrator is established here or not. Just look at their edit history. If it's essentially non-existent and they're clearly here just to block the user, but not do any other editing, then they aren't established enough. It's not that complicated. --Adamant1 (talk) 09:34, 9 January 2025 (UTC)
- i would give weak support to this. but i agree with squirrel. modern_primat ඞඞඞ ----TALK 14:12, 9 January 2025 (UTC)
- and also any admin should not block a user they have had (personal) trouble with in the past. that admin should just report them at com:an/u. modern_primat ඞඞඞ ----TALK 14:15, 9 January 2025 (UTC)
- Oppose. Admins are already elected partially due to their activity. Admins who are inactive are already automatically removed, and we already have deadminship procedures. Specific Admin actions may already be addressed at COM:AN. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 09:47, 9 January 2025 (UTC)
- @Jeff G.: If this stuff already isn't acceptable why not make it a part of Commons:Blocking policy then? Seriously, if the proposal already reflects community norms then what's the difference if it's part of the blocking policy? --Adamant1 (talk) 10:06, 9 January 2025 (UTC)
- I think it could be added somehow, but not with the strict wording you proposed. Most blocks by involved admins are emergency blocks to stop ongoing harassment or edit wars. Such blocks need to be allowed, as we do not have enough admins to always get a second opinion within a very short time. For unblocks we already have an uninvolved-admin guideline. I think we should make the inactivity guidelines a bit more strict so that the technical number of admins gets closer to the number of really available admins. GPSLeo (talk) 12:07, 9 January 2025 (UTC)
- You make a fair point. I'm not necessarily looking to keep admins from being able to do blocks in cases of edit warring or harassment. So I don't have an issue with the specific wording being loosened or otherwise modified if this is approved. Usually proposals are rough drafts of the final wording in the guideline anyway. Now that you mention it though, the "drive-by" blocking could probably be solved by just making the inactivity guidelines a little more strict. I don't have an issue with doing that either. --Adamant1 (talk) 12:22, 9 January 2025 (UTC)
- Oppose When I give a warning to an ill-behaved user, there's about a one-in-five chance that they then attack me. There is no way in the world that should disqualify me from blocking them because they have created a "conflict" or "dispute" with me.
- On the other point: if someone has qualified as an admin, the community has decided that they generally trust this person's judgment. If they are now less active on Commons, that doesn't mean their judgment has deteriorated. I'm not terribly active on en-wiki, where I remain an admin. I still would have no hesitancy to block someone there if I ran across something egregious. - Jmabel ! talk 17:46, 9 January 2025 (UTC)
- I will add, though: there is a problem with certain admins using a block when they are in a content dispute with someone, something where a block should never have entered the picture. At most, they should have brought that to COM:AN/U and let someone else make a decision. If some admin has a pattern of doing that repeatedly, someone should make the case to have them de-admin'd. But the problem isn't that they blocked someone they were in conflict with, the problem is that they blocked someone because they were in conflict with them. - Jmabel ! talk 17:51, 9 January 2025 (UTC)
- @Jmabel: The problem is that someone who's blocked inherently can't take it to ANU. Then you end up with situations like what happened with Enhancing999, where he stopped contributing because his complaints after the fact weren't taken seriously. I've run into similar situations myself. The fact is that it's much harder (if not impossible) to deal with a bad block after the person is unblocked. You can't call foul while blocked either, because admins just play defense for each other and reject unblock requests by default regardless of the actual merits. So involved blocks just shouldn't happen to begin with. It's certainly not something that's worth losing otherwise productive editors over. --Adamant1 (talk) 22:50, 9 January 2025 (UTC)
- Clearly, the person who has been blocked can't take it to AN/U, at least not for the duration of the block. The point is that someone else who sees a pattern of abuse by an administrator can.
- @Adamant1: unless I'm mistaken, you've been banned from bringing issues to AN/U yourself, because it was perceived that you abused that. (Correct me if I am wrong about the ban.) I think that you are skating on thin ice here discussing particular AN/U cases here. I was going to let it slide because your initial proposal made a point of not singling anyone out, but now you have.
- Since you bring up that specific case, I will briefly address it here but, again, I'd prefer you drop the matter for the reasons just stated. The only time User:Enhancing999 has ever been blocked, they were blocked for a week. I see nothing wrong with the process. There was a broad consensus to block. An uninvolved administrator, Taivo, came in and decided the length of the block, and decided precisely that only this short block was in order. Frankly, Taivo may have been doing Enhancing999 a relative favor: someone else might have blocked them a lot longer. There was certainly nothing wrong with him coming in, sizing up the discussion, and making a determination. That is a lot of what admins constantly do on DRs and the like. It is no less appropriate on AN/U. - Jmabel ! talk 23:08, 9 January 2025 (UTC)
- @Jmabel: I specifically avoided mentioning ANU in the original proposal and none of the instances that I have in mind specifically involve ANU. I'm not topic banned from discussing administrator behavior in general either, and if an administrator blocks someone that they're in a dispute with it inherently doesn't involve ANU. THAT'S THE PROBLEM!!!!! So I don't see what the issue with this proposal is in that regard. The same goes for me referring to ANU in an offhand way. Correct me if I'm wrong, but it's not a violation of a topic ban to say ANU isn't the appropriate way to deal with something if someone else brings it up.
- With User:Enhancing999 my issue is purely with how it was handled on his talk page after he was blocked. I don't care about, nor was I involved in, the ANU complaint. But it's not an ANU issue at that point as far as I'm concerned. Say it is though, cool. Then I'll purely speak about my own experiences. At least in my experience I was blocked by a clearly involved admin (again, in a way that didn't involve ANU whatsoever) and there wasn't any way to deal with it either at the time or after the fact. But apparently I should just accept that and not talk about it because I was topic banned from ANU a year later. Even though again, it had absolutely nothing to do with ANU. Right. --Adamant1 (talk) 23:22, 9 January 2025 (UTC)
- @Adamant1, I'm not involved, and I'm not an admin, but I can warn you to be extremely careful when dealing with your topic ban. @Jmabel has been incredibly patient and mellow with you, but you are reaching the end of the ROPE. Be careful with what you say next, and I would recommend taking a walk after writing your next post, but before posting it. All the Best -- Chuck Talk 01:02, 10 January 2025 (UTC)
- I don't have anything else to say about it. The fact is that there aren't and never will be even the most basic standards for how admins behave or use their privileges on here. I have a right to say that something has absolutely nothing whatsoever to do with ANU on my end if someone claims I'm violating the topic ban in the meantime though. I didn't say crap about ANU and I'm not responsible for what other people decide to talk about. Have fun shooting the messenger though. It's impossible to discuss anything on here from a general perspective without it turning personal.
- No other website deals with problems in the same super pedantic, needlessly personal way that things are constantly discussed on here. There have been a ton of discussions over the years about admins unilaterally using their privileges to push their own personal opinions or way of doing things. Nothing is ever done about it though, because this is exactly how every single conversation goes. All I'm asking for here is for there to be minor, basic standards for when admins are allowed to unilaterally block someone. But let's not do that, even though it's clearly a problem and leading people to not contribute to the website, just because I'm topic banned from an unrelated area that has absolutely nothing to do with this. Adamant1 (talk) 01:41, 10 January 2025 (UTC)
- BTW with Enhancing999, I had gotten into it with him over the exact same thing that he was blocked for a couple of days before he was blocked. It's absolutely within my right to discuss something that I was involved in, and it's not my problem that other people decided to escalate things or take it to a different forum after that. My bad for mentioning a conflict that I was personally involved in, though. I wasn't aware it would be such a big no-no. --Adamant1 (talk) 01:48, 10 January 2025 (UTC)
- I'm not trying to tell you to stop talking about that issue, I just don't want you to get blocked. Friends don't let friends get sanctioned, as Barkeep49 put it. All the Best -- Chuck Talk 04:49, 10 January 2025 (UTC)
- OK. Fair enough. --Adamant1 (talk) 05:07, 10 January 2025 (UTC)
Require VRT permission from nude models
There are currently many cases of nude models requesting the deletion of photos in which they are visible. We do not have a clear policy on how to handle such cases, and every solution has problems. I want to propose a new process to minimize this problem for future uploads.
I would propose a new guideline like the following:
"Photos of nude people need explicit permission from the model verified through the VRT process. This applies to photos of primary genitalia and photos of identifiable people in sexually explicit/erotic poses also if only partial nude. This also applies to photos form external sources with an exception for trustworthy medical journals or similar. This does not apply to public nudity at protests, fairs and shows where photographing was allowed. For photos of such events only the regular rules on photos of identifiable people apply. This applies to all photos uploaded after Date X. Within the process the people are reminded that the permission is irrevocable. Having such permission does not automatically put the photo within the scope."
As I think that would not be more than a handful of cases per month, I think this could be handled by the VRT team. If this new task is a problem for the VRT we could also ask if the T&S team could help in this sensitive area. GPSLeo (talk) 10:09, 11 January 2025 (UTC)
- I thought the GDPR right "to be forgotten" makes an "irrevocable" model release impossible? What would such a guideline mean for photos from pride parades? At pride parades there are regularly people with visible primary genitalia. C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 11:33, 11 January 2025 (UTC)
- I think there is no higher court decision on "model contract" vs. "right to be forgotten", but I would assume that the model contract is the superior right. If it were otherwise, we would already have cases of known movies where some actors got themselves removed from the movie. I will add a sentence on public nudity. I had this in mind but then forgot it when writing the draft. GPSLeo (talk) 11:41, 11 January 2025 (UTC)
Before putting new tasks on the VRT, please consider speaking with the VRT. Their current policy is not to process any personality rights releases, which also includes model contracts, not least because they are unable to reasonably verify such releases. --Krd 11:50, 11 January 2025 (UTC)
- I am aware that this is often more complicated than for copyright. But I think it is better to have a "delete if not verified" policy instead of keeping everything and handling all the removal requests, which also require identity confirmation. GPSLeo (talk) 12:07, 11 January 2025 (UTC)
- How many removal requests have there been in the last 2 years? Krd 12:29, 11 January 2025 (UTC)
- I do not know how many cases were handled privately by VRT and Oversight but for the cases starting as regular deletion requests I would estimate around ten to twenty cases in the last two years. GPSLeo (talk) 12:55, 11 January 2025 (UTC)
- No offence, but can we make sure we are addressing a problem, and not a non-problem, before we take such an expensive approach? I for sure don't see all such VRT requests, but I think I see at least half of them, and I have no memory of any relevant issue. If they happen, they are mostly cases where consent was initially given and is going to be revoked later, which is a situation otherwise addressed by the proposal.
- Who is going to ask the oversighters, so that we know what we are talking about? Krd 17:26, 11 January 2025 (UTC)
The proposal seems sensible to me, as long as the VRT would be actually willing to handle such permissions, see Krd's comment. I would, however, add something exempting historical photographs too (for example, the photographs in Category:19th-century photographs of nude people), or photographs of now deceased people in general (photos taken when the person was alive). In Switzerland, for example, the "right to one's own picture" (Recht am eigenen Bild) basically ends with death, see de:Recht am eigenen Bild (Schweiz) and can't be claimed by family members; in Germany, family members can claim it for up to 10 years after the person's death (per de:Postmortales Persönlichkeitsrecht). Gestumblindi (talk) 14:41, 11 January 2025 (UTC)
- I’d say if a nude model legitimately requests their picture be taken down, we just take it down; requiring VRT for each and every non-historical, non-nudist, non-self-shot photo of a nude person seems tedious and unnecessary. 10-20 cases is a non-trivial amount, but I’d think it’s a pretty small percentage of all the nude photography we host here. Dronebogus (talk) 15:24, 11 January 2025 (UTC)
Something like this may be reasonable, but the considerations raised by User:Gestumblindi and User:Dronebogus are relevant. To list three exceptions I see:
- Historical photos, especially photos that were routinely published in their own era and whose copyrights have now expired. E.g. I cannot imagine doubting appropriate consent on a nude photo of actress Louise Brooks.
- Photos from societies and cultures where what is in the West considered "nudity" is simply considered normal (e.g. places in Africa or Pacific Islands where women do not routinely cover their breasts).
- Photos taken at public events in countries where appearing in public is de facto consent for photography. E.g. the many people who appear naked at the Folsom Street Fair, or Fremont Solstice Parade, or Mardi Gras in New Orleans. It is not practical to get VRT from a person walking by in a parade, nor do I see any need to do so in a situation where they have no legal expectation of privacy.
I would not be surprised if there are other equally strong cases for exceptions. - Jmabel ! talk 19:50, 11 January 2025 (UTC)
- Yes, the part for historical photos should definitely be added and defined in a very broad sense (all photos older than 25? years). The second point is the reason why I made the complicated definition to exclude female breasts in non-sexual contexts from the guideline. GPSLeo (talk) 20:37, 11 January 2025 (UTC)
- The proposal was about primary genitalia. Now you introduce secondary gender-specific body parts like breasts or a beard. There are societies that forbid a man to show a shaven face. Should we also require VRT permission for images of Iranian or Afghan men without a beard? C.Suthorn (@Life_is@no-pony.farm - p7.ee/p) (talk) 23:10, 11 January 2025 (UTC)
- The proposal would create very big issues (increase in work for VRT, increase of DRs towards nude pictures) to potentially solve very few (almost non-existent) issues. Christian Ferrer (talk) 08:41, 12 January 2025 (UTC)
- Oppose A solution in search of a problem. The Squirrel Conspiracy (talk) 11:28, 12 January 2025 (UTC)
- Comment I'm very active in VRT and I've never seen a case like this (nude models requesting the deletion of photos where they are visible). I think we can handle it when the moment arrives. --Ganímedes (talk) 22:49, 12 January 2025 (UTC)
- Oppose Unneeded, and it would cause an increase in deletion requests and an increase in work for VRT. Isla (talk) 23:08, 14 January 2025 (UTC)
- Support Assuming Jmabel's suggestions are implemented if it passes. Regardless, this seems like a reasonable proposal and I don't really think the arguments against it are compelling. God forbid the VRT team has to handle a couple more VRT permissions every now and then. --Adamant1 (talk) 03:16, 16 January 2025 (UTC)
- It's not about workload; the comment above was
"Their current policy is not to process any personality rights releases, which also included model contacts, not least because they are unable to reasonably verify such releases."
Unless this issue is adequately addressed, the proposal is a non-starter. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:12, 16 January 2025 (UTC)
- Fair enough. I must have missed that. I agree the proposal is probably a non-starter if it's not worked out though. --Adamant1 (talk) 13:29, 16 January 2025 (UTC)
- If we see a need for something we are currently not able to do we have to show this to get support from the WMF. The WMF will only help us finding a solution if there is a consensus in the community that there is need for this. We need the community decision that there is a need before we can talk about finding solutions. GPSLeo (talk) 13:41, 16 January 2025 (UTC)
- Oppose mandatory VRT permission. My proposal is: if a model makes a legitimate request to remove an image, we remove it, no questions asked. Dronebogus (talk) 18:27, 16 January 2025 (UTC)
- As far as your proposal is concerned, I suspect that's more or less the case already, if someone knows who or where to ask - but I'd absolutely support a more substantial proposal to make that an official policy, and to make it better known. Omphalographer (talk) 02:41, 17 January 2025 (UTC)
- Commons:Contact us/Problems mentions info-commons for issues about "Images of yourself". So in a sense, it's already handled in VRT, or at least our documentation says so.
- We have 2 Commons-related community queues in VRT: info-commons mentioned in Commons:Contact us/Problems and permissions-commons described in COM:VRT. The latter page might make it look like permissions-commons=VRT, but that is not true. whym (talk) 10:32, 18 January 2025 (UTC)