Commons talk:AI-generated media

From Wikimedia Commons, the free media repository

Structured data suggestion[edit]

The policy might also want to mention something about how to model the structured data for these images, for instance by using P31 – instance of with Q96407327 – synthetic media (suggested in the Wikidata Telegram group). Ciell (talk) 11:59, 7 January 2023 (UTC)[reply]
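For concreteness, the P31 (instance of) → Q96407327 (synthetic media) statement suggested above would be expressed as a Wikibase "snak" in JSON, the format used by the structured-data APIs on Commons. A minimal sketch (the helper function name is hypothetical; only the snak layout reflects the actual Wikibase data model):

```python
import json

def instance_of_claim(item_qid: str) -> dict:
    """Build a P31 (instance of) main snak pointing at the given item,
    in the JSON shape used by Wikibase structured-data APIs."""
    numeric_id = int(item_qid.lstrip("Q"))
    return {
        "snaktype": "value",
        "property": "P31",
        "datavalue": {
            "value": {"entity-type": "item", "numeric-id": numeric_id, "id": item_qid},
            "type": "wikibase-entityid",
        },
    }

# The statement suggested above: instance of -> synthetic media
claim = instance_of_claim("Q96407327")
print(json.dumps(claim, indent=2))
```

In practice this JSON would be submitted via the MediaWiki API (e.g. `wbcreateclaim`) or entered through the "Structured data" tab on a file page.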

Privacy of living people[edit]

Starting a new section to specifically discuss the "Privacy of living people" section. I don't think the current wording reflects the consensus on Commons, but I'm not sure what to change it to. Suggestions? Nosferattus (talk) 17:55, 19 February 2023 (UTC)[reply]

At the very least, the claim that "there is a certain likelihood that any AI-generated image with a human face in it violates the privacy of a living person" should be backed up by evidence or removed (also given that other hand-wavy probability claims added by the same user turn out to be highly dubious upon closer inspection, see above). Regards, HaeB (talk) 15:16, 20 February 2023 (UTC)[reply]
I am concerned about scenarios like these:
Gnom (talk) 22:19, 22 February 2023 (UTC)[reply]
Just to remind everyone: these might be issues in some countries, but basically are not issues in the U.S. (where our servers are hosted) unless the image is somehow disparaging or is used in a way that implies an endorsement by that person. - Jmabel ! talk 22:39, 22 February 2023 (UTC)[reply]
@Gnom: The section that discusses privacy issues is under the question "Can AI-generated media be lawfully hosted on Commons?". Can you elaborate on what laws you are concerned with? While I share your general concern for the privacy of living people, we need to focus on what the actual legal issues are (and how Commons typically deals with those legal issues). If your concerns are not legal in nature, they should be moved to a different section. Nosferattus (talk) 00:49, 24 February 2023 (UTC)[reply]
On a legal level, this is of course a violation of the GDPR insofar as EU residents are being depicted. Under U.S. law, I am also quite confident that it would be illegal to host such images, but others are probably in a better position to assess this. On a non-legal level, I can only say that I would go berserk if an AI were to spit out my face under certain conditions. Gnom (talk) 08:12, 24 February 2023 (UTC)[reply]
@Gnom AI-generated art may show faces of living people who were not asked for their consent to appear in the output because photos or drawings depicting them were used to train the software. Accordingly, there is a certain likelihood that any AI-generated image with a human face in it violates the privacy of a living person. Under the Precautionary Principle, these respective files may be nominated for deletion.
No. There are a significant number of countries where consent is not required, and Commons generally does not impose its own consent requirements unless the photo seems to be undignified.
Ironically, the page includes an AI-generated image with a human face in it (Mona Lisa by Midjourney AI.jpg). Should it be deleted under the precautionary principle? Of course not, but there’s nothing on the page to explain this.
Where we can re-use existing laws, guidelines and policies, we should do so. This page should cover only AI-specific issues, and link to other pages for the other issues. Brianjd (talk) 15:02, 25 February 2023 (UTC)[reply]
In the US (where Commons is hosted), Australia (where I live), and some other countries, it is generally legal to take and publish images of people in public without their consent. The EU and the US have very different attitudes to privacy, and people living in one of these jurisdictions must be very careful to avoid false assumptions about the other. Brianjd (talk) 15:05, 25 February 2023 (UTC)[reply]
On another matter, why focus on faces? Images of people can violate privacy without depicting faces. Brianjd (talk) 15:06, 25 February 2023 (UTC)[reply]
Hi, I agree that the Mona Lisa is a bad example (and I should note that I was not the one to put it there). I would prefer the examples I inserted above instead. The left image, assuming(!) it were to actually show the face of a real underage girl, would give her parents a right to sue the Wikimedia Foundation for violating their daughter's privacy, even under U.S. law.
Also, you are of course correct that not only the depiction of a person's face can constitute a violation of their privacy, but that is where I see the main issue. Gnom (talk) 20:17, 4 March 2023 (UTC)[reply]
There is a problem if an image shows an actual person via the actual original image. But when the AI creates a new image based on a dozen or thousands of original images, and someone says "That new image looks like person X!", is that a privacy issue? Who decides that the image looks like person X? How do we determine that? "Looking alike" can be the subjective judgement of an individual. I, for example, believe that Tim Curry looks like Edmond Privat. Ziko van Dijk (talk) 18:59, 5 March 2023 (UTC)[reply]
I would think this is an area where we would not want to host things that are in the gray zones. - Jmabel ! talk 20:49, 5 March 2023 (UTC)[reply]
@Ziko The cases that I am thinking about are those where there is simply no question whether or not the AI-generated image depicts a real person. Gnom (talk) 08:08, 6 March 2023 (UTC)[reply]
@Gnom, can you give an example of what you mean? In which context would there be no question? Ziko van Dijk (talk) 08:13, 6 March 2023 (UTC)[reply]
If the similarity between the AI-generated human face and their real likeness is so close that if you asked 100 people, 99 would say that the two images (the AI-generated image and a photo of the actual human face) depict the same person. That's the scenario I would like to discuss. Gnom (talk) 15:04, 6 March 2023 (UTC)[reply]
You vastly overestimate how skilled your average person is at telling the difference Trade (talk) 21:49, 18 March 2023 (UTC)[reply]

@Gnom: While I agree with you in principle, I think you have a misunderstanding of U.S. law. In the U.S. there are very very few protections for privacy and strong protections for expression. For example, in the U.S. I could go to a public playground, take photos of a 5-year-old child, ask no one's permission or consent, and then post those photos on the internet or in an art show or whatever. Not only that, but I could even take photos of someone through the window of their home, and in many states it would be perfectly legal. As long as there is no nudity involved (which triggers the federal video voyeurism law) or commercial use involved (which triggers rights of publicity), you can violate people's privacy all day long in the U.S. If you don't believe me, read [1]. This is what happens when you enshrine free speech in your constitution, but not privacy. This lack of privacy protection is also what specifically caused Roe v. Wade to recently be overturned in the U.S. Supreme Court. The Court ruled that because there is no actual "right to privacy" in the U.S., Roe v. Wade was invalid and thus abortion can be prohibited by the law. Of course, I don't believe that Commons policy on privacy should be based on whatever is allowed by U.S. law, but we can't pretend that such a policy is based on the law when it isn't. Our policy should be based on basic ethics and respect for human dignity. But we have to convince the Commons community to agree to such principles in this case. It can't just be taken for granted. Nosferattus (talk) 19:38, 18 March 2023 (UTC)[reply]

Since there doesn't seem to be consensus for the contents of this section, I've removed it. I would ideally like to replace it with some sort of guidance we can agree on. Does anyone have suggestions? Nosferattus (talk) 18:46, 19 August 2023 (UTC)[reply]

Uploads by PixelPenguin87[edit]

PixelPenguin87 (talk · contribs)

Might wanna contact the legal team on what to do with these types of AI-generated images. This sort of photorealistic AI-generated content could potentially be a violation of Commons:CHILDPROTECT Trade (talk) 21:46, 18 March 2023 (UTC)[reply]

@Brianjd, Ricky81682, and Nosferattus: --Trade (talk) 21:53, 18 March 2023 (UTC)[reply]
@King of Hearts: --Trade (talk) 03:28, 19 March 2023 (UTC)[reply]
I fully agree. A WMF Legal clarification would be useful. Wutsje 23:06, 18 March 2023 (UTC)[reply]
See also U93rFh2T (talk · contribs), VibrantExplorer372 (talk · contribs) and BlueMoon2023 (talk · contribs). Wutsje 03:08, 19 March 2023 (UTC)[reply]
Have you tried to seek WMF out for a legal clarification? Trade (talk) 15:49, 21 March 2023 (UTC)[reply]
I don't think we need legal advice to decide just to delete this sort of thing. - Jmabel ! talk 16:35, 21 March 2023 (UTC)[reply]
I didn't want to risk going against community consensus. Trade (talk) 20:16, 31 March 2023 (UTC)[reply]

m:Wikilegal/Copyright Analysis of ChatGPT was published a couple of days ago. It is primarily about text, but it also briefly mentions AI-generated images. whym (talk) 13:54, 24 March 2023 (UTC)[reply]

United Kingdom[edit]

{{PD-algorithm}} contains a note "The United Kingdom provides a limited term of copyright protection for computer-generated works of 50 years from creation", citing this UK Intellectual Property Office page. From this page:

Unlike most other countries, the UK protects computer-generated works which do not have a human creator (s178 CDPA). The law designates the author of such a work as “the person by whom the arrangements necessary for the creation of the work are undertaken” (s9(3) CDPA). Protection lasts for 50 years from the date the work is made (s12(7) CDPA).

I think it might be tricky to ascertain who "the person by whom the arrangements necessary for the creation of the work are undertaken" might be for modern AI-generated media (the author of the prompt? The programmer(s) of the software? Both? After all, the software itself is certainly "necessary for the creation of the work", and so is the prompt), but it seems that in the UK, AI-generated media is protected anyway, even if we might be unsure who owns the rights. The Office also states there:

The UK remains one of only a handful of countries worldwide that provides this protection. Investment in AI has taken place in other countries, such as the United States, which do not provide this type of protection. Some people argue that this protection is not needed, and others that it should be provided differently.

So I think this page should be amended too, in some way, to note that in the UK and in a "handful of countries" (which countries?) there is protection for AI art. In deletion discussions, I assume that we have to check where the image was generated: if generated in the US or most other countries, it's {{PD-algorithm}}, but if generated in the UK or one of the "handful of countries" with protection, we must delete it. In the case of Commons:Deletion requests/File:Alice and Sparkle cover.jpg, which I decided to keep, I think it's fine, as the author of the prompt per this article is currently living in California, and was using Midjourney, which is based in San Francisco. Gestumblindi (talk) 20:58, 9 June 2023 (UTC)[reply]

It's possible the author didn't know which other countries he was referring to and simply made a reasonable assumption Trade (talk) 21:05, 9 June 2023 (UTC)[reply]
Well, to remedy the lack of UK information, I made an addition based on this UK Intellectual Property Office information. Gestumblindi (talk) 18:24, 10 June 2023 (UTC)[reply]

A large-scale AI-generated images upload[edit]

Just for the record, in Commons:Deletion requests/Files in Category:AI images created by David S. Soriano I would have decided exactly like Minorax. Per COM:INUSE, it would be hard to argue for deleting files that are actually already in use (and they were added to articles and user pages by various users, not by the uploader), which shows that, apparently, these images can be considered useful. On the other hand, I think there is really a "slippery slope": if we just blanket kept the images, the door would be open for flooding Commons with thousands, tens of thousands (well, why not millions?) of such images, as it's very easy to mass-generate this kind of content. Gestumblindi (talk) 09:11, 18 June 2023 (UTC)[reply]

Can we know how stable those usages are, beyond the binary in-use vs. not-in-use? If it has to be checked manually, I hope there is better tooling to track usage over time. It looks like File:UFO From Distant Past.png was kept for being in use, but it's not in use at the moment. whym (talk) 03:15, 19 June 2023 (UTC)[reply]
Given that the remaining files appear mostly on user pages, I'd expect that any mainspace project usage was washed out days or weeks later. File:UFO From Distant Past.png is the seventh result in a Commons search for "ufo" and certainly the most striking and exciting depiction, but if anyone added it to a project article about UFOs it would (as an illustration of no particular UFO case, drawn in part or in full by a hallucinating AI) be reverted or replaced with something more useful. The same goes for Soriano's cubism uploads: a superficially attentive editor might search Commons for "cubism" and add one to an article as an example of that style, but an editor paying more attention would replace it with a free image by a notable and human artist. Belbury (talk) 10:59, 19 June 2023 (UTC)[reply]
This would be far easier to solve if anyone could come in contact with David Trade (talk) 01:38, 27 June 2023 (UTC)[reply]

Can the prompts themself be copyrighted?[edit]

I am seeing a trend where users have two separate copyright tags: one for the AI prompt they used and one for the AI-generated image that came from the output. Should we allow this? Trade (talk) 21:05, 28 June 2023 (UTC)[reply]

And plenty of others I can't currently remember. Trade (talk) 01:21, 29 June 2023 (UTC)[reply]

Those are probably on the borderline of copyrightability (the first more likely than the second, in my view). Given that in both cases they are specifically disavowing copyright on the prompt and placing it in the public domain, it's hard to see anything objectionable. I think I'd object to someone putting something here, claiming copyright on their prompt, and insisting that using even the prompt requires conforming to a license. - Jmabel ! talk 02:00, 29 June 2023 (UTC)[reply]
  • Are there any of these where the prompt is being released under a licence? The examples here are really the opposite of this: the uploader has put the prompt deliberately into [sic] the public domain, to clearly disavow any licensing. Andy Dingley (talk) 02:05, 29 June 2023 (UTC)[reply]
I think it's fine if the uploader is explicitly declaring the prompt public domain (or CC0). Other cases would need a closer look. Nosferattus (talk) 01:50, 16 July 2023 (UTC)[reply]
If the author has the right to release a prompt as PD, then they have the right to license the prompt as CC-BY-SA. Allowing the former but not the latter is essentially saying that they only own the copyright to the prompt as long as it is convenient for Commons Trade (talk) 02:12, 16 July 2023 (UTC)[reply]

Prompts need to be disclosed[edit]

It seems to me that the main way AI-generated images are of use is precisely as examples of AI-generated images. To that end, I propose that we should (going forward) require that the file page text for all AI-generated images include, at a minimum:

  1. what AI software was used
  2. what prompt was used

I have no problem with some images being grandfathered in, and of course exceptions need to be made for images that are, for one or another reason, notable in their own right. There might even be a reason for some other exception I'm not thinking of, but I've been seeing floods of low-value AI-generated images, often not marked as such, often involving sexualized images of young women or girls. If those last are coming from prompts that do not ask for anything of the sort, then that itself would be important to document. If, as I suspect, they come from prompts that ask for exactly that, then they really don't belong here, any more than someone's personal photos of themselves and their friends.

And, yes, I am discussing two separate (but related) issues here, disclosure and scope. - Jmabel ! talk 16:53, 28 July 2023 (UTC)[reply]

For starters, we need a category for AI images without prompts so we can keep track of the issue Trade (talk) 18:56, 19 August 2023 (UTC)[reply]
(which Trade introduced with Category:AI images generated using unidentified prompts). - Jmabel ! talk 23:55, 19 August 2023 (UTC)[reply]

Question about files that are most likely AI-generated, but without definitive proof[edit]

Hello, I was wondering what best practice would be in cases where a file found online looks very clearly like an AI-generated image, but the creator either did not specify that it was created with AI or claimed it to be their own work. Examples include this film poster (notice the pupils and hair), as well as this poster (notice the inconsistent teeth, the bizarre foot, the inconsistent windows on buildings, and the weird birds in the sky). Should we allow them to be uploaded under the presumption that they are AI, or should we assume that they're copyrighted unless they're explicitly stated to be AI? Di (they-them) (talk) 00:16, 20 August 2023 (UTC)[reply]

Even if an image contains a lot of obvious AI artefacts, you'd also have to assume that there was no subsequent creative human editing of that image, which seems unknowable. Belbury (talk) 08:02, 20 August 2023 (UTC)[reply]
There is exactly zero benefit from trying to do original research on whether or not an image was AI-generated without the author admitting so. You are just asking to create a huge copyright mess Trade (talk) 22:38, 20 August 2023 (UTC)[reply]

Is the note unnecessary?[edit]

The note "Although most current media-generating programs qualify as machine learning and not true artificial intelligence, the term 'artificial intelligence' is commonly used colloquially to describe them, and as such is the term used on this page" seems unnecessary to me. Media-generating programs are indeed artificial intelligence as well as machine learning. Machine learning is considered a subset of artificial intelligence. Artificial intelligence doesn't only refer to LLMs, and includes DALLE-2, Midjourney, etc. Machine learning might be more precise, but AI isn't incorrect. I would like to hear what other people think about this. Chamaemelum (talk) 18:32, 20 August 2023 (UTC)[reply]

I agree with your assessment. Machine learning is a subset of AI. Nosferattus (talk) 19:49, 20 August 2023 (UTC)[reply]

Real-life[edit]

@Trade: you added the parenthetical phrase in "AI fan art of (real-life) fictional characters", which seems oxymoronic to me. How can something be both "real-life" and "fictional"? - Jmabel ! talk 01:52, 21 August 2023 (UTC)[reply]

Fictional characters that exist outside of the AI generated art in question Trade (talk) 01:54, 21 August 2023 (UTC)[reply]
I hoped the name of the category was enough, but unfortunately people keep filling it with images that have nothing to do with fan art. Trade (talk) 01:56, 21 August 2023 (UTC)[reply]
To be fair, the description at the top of Category:AI-generated fictional characters doesn't suggest not to. And a lot of categorisation happens based on the category name alone.
Would Category:AI-generated fan art be a useful subcategory to create? Belbury (talk) 09:13, 21 August 2023 (UTC)[reply]

AI images of real subjects (aka. "deepfakes")[edit]

One subject which this draft doesn't seem to address clearly is the topic of AI images which appear to represent real subjects - e.g. real people, real places, real historical events, etc. These images have the potential to be misleading to viewers, and can cause harm to the project by discouraging the contribution of real images, or by being used as substitutes for real images which are already available.

I'd like to address this as follows. This is intentionally strongly worded, but I feel that it's warranted given the potential for deception:

AI-generated images which contain photorealistic depictions of notable people, places, or historical events have the potential to deceive viewers, and must not be uploaded.

If AI-generated images containing these subjects are used as illustrations, effort should be made to use images which cannot be mistaken for photographs, e.g. by prompting the image generation model to use a cartoon art style.

In a limited number of cases, realistic images containing these subjects may be used as demonstrations of AI image generation or "deepfakes". These images should be watermarked to make it clear to viewers and downstream users of these images that they were machine-generated.

Thoughts? Omphalographer (talk) 23:23, 11 September 2023 (UTC)[reply]

A COM:WATERMARK on a demonstration image significantly reduces any constructive reuse of it. Anybody wanting to reuse a notable fake like File:Pope Francis in puffy winter jacket.jpg in their classroom or book should be able to get that direct from Commons.
These images would benefit from prominent warning templates, though, and perhaps an explicit "Fake image of..." in the filenames. Belbury (talk) 08:25, 12 September 2023 (UTC)[reply]
Why not just add a parameter to the AI template that can be used to indicate whether or not the image depicts a living person? Trade (talk) 10:51, 12 September 2023 (UTC)[reply]
The problem I'm concerned with is reuse of these images outside Wikimedia projects, where the image description certainly won't be available and the filename will likely be lost as well. Photorealistic AI-generated images of recognizable subjects should be fairly rare on Wikimedia projects, and I'm confident that editors can come up with some way of marking them which makes their nature clear without being overly intrusive.
How about the rest? Are we on board with the overall principle? Omphalographer (talk) 19:41, 12 September 2023 (UTC)[reply]
Seems reasonable to me. And an alternative to a watermark in the narrow sense would be a mandatory notice in a border under the photo. - Jmabel ! talk 20:10, 12 September 2023 (UTC)[reply]
No matter the amount of whistles, alarms and whatnot you put up, there will always be someone who can't be bothered to read it before posting the image somewhere else Trade (talk) 15:41, 14 September 2023 (UTC)[reply]
Certainly. But if the image itself can tell viewers "hey, I'm not real", then at least it has less potential to mislead. Omphalographer (talk) 17:01, 14 September 2023 (UTC)[reply]
In what manner is that not covered by the current AI-image license template + related categories and description? Trade (talk) 22:41, 23 September 2023 (UTC)[reply]
Because those do not tend to travel with the image itself when it is reproduced. Indeed, if the image is used incorrectly within a Wikipedia, there would be no indication of that unless someone clicks through. - Jmabel ! talk 22:51, 23 September 2023 (UTC)[reply]
Even if you click to expand an embedded image in an article, the full license and disclaimer templates are only visible if you then click through to the full image page. To an unsophisticated user, they might as well not exist. Omphalographer (talk) 00:19, 24 September 2023 (UTC)[reply]
So in short, the only realistic solution would be a warning template that appears when someone on a Wiki project clicks to expand the image Trade (talk) 22:04, 13 October 2023 (UTC)[reply]
@Omphalographer: I think we should change "notable people" to "actual people" and remove "places" (as that seems overly broad and unnecessary to me). Nosferattus (talk) 23:14, 20 September 2023 (UTC)[reply]
We may also want to clarify that this doesn't apply to AI-enhanced photographs. Nosferattus (talk) 23:16, 20 September 2023 (UTC)[reply]
Excellent point on "notable people"; I agree that this policy should extend to any actual person, not just ones who cross some threshold of notability.
The inclusion of "places" was intentional. A synthetic photo of a specific place can be just as misleading as one of a person or event; consider a synthesized photo of a culturally significant location like the Notre-Dame de Paris or Mecca, for instance.
AI-enhanced photographs are... complicated. There's no obvious line dividing photos which are merely "AI-enhanced" and ones which begin to incorporate content which wasn't present in the source photo. For instance, the "Space Zoom" feature of some Samsung phones replaced photos of the moon with reference photos of the moon - this level of processing would probably be inappropriate for Commons photos. Omphalographer (talk) 00:11, 21 September 2023 (UTC)[reply]
@Omphalographer I think there are some legitimate reasons for creating and uploading photorealistic AI images of places, and less danger that they cause harm. For example, an AI generated image of an ice-free Greenland might be useful for a Wikibook discussing climate change. Sure, it could be misleading if used in the wrong context, but it doesn't worry me as much as AI images of people.
So are you suggesting that all AI-enhanced photographs should also be banned? This will probably be the majority of all photographs in the near future, so I wouldn't support that proposal. Nosferattus (talk) 00:32, 20 October 2023 (UTC)[reply]
I'm not suggesting that all AI-enhanced photos should be banned, but that the limits of what's considered acceptable "enhancement" need to be examined. Filters which make mild changes like synthetically blurring the background behind a person's face or adjusting contrast are almost certainly fine; ones which add/remove substantial elements to an image or otherwise dramatically modify the nature of the photo (like style-transferring a photograph into a painting or vice versa) are probably not.
With regard to "places", would you be happier if that were worded as "landmarks"? What I had in mind was synthetic photos of notable buildings, monuments, or similarly specific places - not just any location. Omphalographer (talk) 05:57, 20 October 2023 (UTC)[reply]
 Oppose - Very strongly against this proposal, which would be highly problematic for many reasons and unwarranted censorship.
Agree with Belbury on prominent warning templates, though, and perhaps an explicit "Fake image of..." in the filenames - we should have prominent templates for AI images in general and prominent warning templates for deepfake ones...a policy on file title requirements is something to consider. Prototyperspective (talk) 22:11, 13 October 2023 (UTC)[reply]
 Oppose per Prototyperspective. I fail to see why this issue is fundamentally different from other kinds of images that could be convincingly misrepresented as actual, unaltered photographs depicting real people, real places, real historical events, an issue that is at least a century old (see e.g. w:Censorship of images in the Soviet Union, Category:Manipulated photographs etc).
That said, I would support strengthening existing policies against image descriptions (and file names) that misrepresent such images as actual photos, whether they are AI-generated, photoshopped (in the sense of edits that go beyond mere aesthetics and change what a general viewer may infer from the image about the depicted person, place etc.) or otherwise altered. That's assuming that we have such policies already - do we? (not seeing anything at Template:Commons policies and guidelines)
PS regarding landmarks: I seem to recall that authenticity issues have been repeatedly debated, years ago already, in context of Wiki Loves Monuments and related contests, with some contributors arguing that alterations like removing a powerline or such that "ruins" a beautiful shot of a monument should not affect eligibility. I do find that problematic too and would support at least a requirement to clearly document such alterations in the file description.
Regards, HaeB (talk) 01:35, 25 October 2023 (UTC)[reply]
 Weak oppose per HaeB. Although I'm sympathetic to the idea of banning deepfake images, I think the proposed wording is too broad in one sense (subjects included) and too narrow in another sense (only addressing AI images). I would be open to a proposal focusing on photo-realistic images of people or events that seem intended to deceive or mislead (regardless of whether they are AI generated or not). Nosferattus (talk) 04:47, 25 October 2023 (UTC)[reply]

Custom template for upscaled images[edit]

This page currently advises adding {{Retouched picture}} to upscaled images, which if used without inserting specific text gives a neutral message of This is a retouched picture, which means that it has been digitally altered from its original version. with no mention of the AI nature of the manipulation.

Would it be useful to have a custom AI-upscale template that puts the image into a relevant category and also spells out some of the issues with AI upscaling (potentially introducing details which may not be present at all in the original, copyrighted elements, etc), the way that {{Colorized}} specifically warns the user that the coloring is speculative and may differ significantly from the real colors? Belbury (talk) 08:19, 4 October 2023 (UTC)[reply]

Prototyperspective (talk) 09:44, 5 October 2023 (UTC)[reply]

I've made a rough first draft of such a template at {{AI upscaled}}, which currently looks like this:

This image has been digitally upscaled using AI software.

This process may have introduced inaccurate, speculative details not present in the original picture. The image may also contain copyrightable elements of training data.

When the template is included on a file page it adds that file to Category:Photos modified by AI per the recommendation at Commons:AI-generated_media#Categorization_and_templates.

Feedback appreciated on what the message should say, and what options the template should take. It should probably always include a thumbnail link to the original image (or an alert that the original is freely licenced but hasn't been uploaded to Commons), and an option to say what software was used, if known, so that the file can be subcategorised appropriately.

It may well be worth expanding this to a generic AI template that also covers restoration and generation, but I'll put this forward for now. --Belbury (talk) 08:21, 12 October 2023 (UTC)[reply]

Could you make a template for AI misgeneration? Trade (talk) 23:41, 18 November 2023 (UTC)[reply]
Would that be meaningfully distinct from your existing {{Bad AI}} template? Belbury (talk) 19:39, 19 November 2023 (UTC)[reply]

Wikimedia Foundation position on AI-generated content[edit]

The Wikimedia Foundation recently submitted some comments to the US Copyright Office in response to a Request for Comments on Artificial Intelligence and Copyright. Many of the points made by the Foundation will likely be of interest here, particularly the opening statement that:

Overall, the Foundation believes that generative AI tools offer benefits to help humans work more efficiently, but that there are risks of harms from abuse of these tools, particularly to generate large quantities of low-quality material.

File:Wikimedia Foundation’s Responses to the US Copyright Office Request for Comments on AI and Copyright, 2023.pdf

Omphalographer (talk) 20:06, 9 November 2023 (UTC)[reply]

I wonder if there was a specific DR that made the Foundation concerned about low-quality spam. Or maybe someone just complained to staff? Trade (talk) 23:38, 18 November 2023 (UTC)[reply]

Interesting[edit]

https://twitter.com/Kyatic/status/1725120435644239889 2804:14D:5C32:4673:DAF2:B1E3:1D20:8CB7 03:31, 17 November 2023 (UTC)[reply]

This is highly relevant - thanks, whoever you are! Teaser:

This is the best example I've found yet of how derivative AI 'art' is. The person who generated the image on the left asked Midjourney to generate 'an average woman of Afghanistan'. It produced an almost carbon copy of the 1984 photo of Sharbat Gula, taken by Steve McCurry.

If you don't have a Twitter account, you can read the thread at https://nitter.net/Kyatic/status/1725120435644239889.
Omphalographer (talk) 03:59, 17 November 2023 (UTC)[reply]
Hypothetically would we be allowed to upload the AI photo here? Trade (talk) 23:35, 18 November 2023 (UTC)[reply]
No? The whole discussion is about how it's a derivative work of the National Geographic photo. Omphalographer (talk) 02:15, 19 November 2023 (UTC)[reply]
I couldn't find clear info at Commons:Derivative works regarding artworks that look very close to non-CCBY photographs. This particular image may be fine; it's not prohibited just because the person looks similar to an actual person who was photographed, with that photograph serving as the 'inspiration' for the AI creator.
It is overall exceptional for AI images to look very similar to an existing photograph, and it depends on issues with training data, parameters/weighting, and the prompts. Moreover, it's possible that this was caused on purpose to make a point, or that an extreme weight was put on high-valued photographs for cases like this while there are only a few images of women from Afghanistan in the training data...more likely, though, the AI simply does not 'understand' (or misunderstands) what is meant by "average" here.
img2img issues
The bigger issue is that you can use images as input images and let the AI modify them according to your prompt (example of how this can be really useful). This means some people may upload such an image without specifying the input image, so people can't check whether or not that input is CCBY. If the strength of the input image is configured to be e.g. 99%, the resulting image would look very similar. I think there should be a policy that when you upload an AI-generated image via img2img, you should specify the input image. Prototyperspective (talk) 11:22, 19 November 2023 (UTC)[reply]
If a human had created that image, we would certainly delete it as a plagiaristic copyvio. I see no reason to treat it more favorably because the plagiarist is a computer program. - Jmabel ! talk 18:48, 19 November 2023 (UTC)[reply]
I don't think so. I don't see much use in discussing this particular case and only meant to say that the derivative works page does not really have info on this, but I think artistic works that show something that has previously been photographed are allowed. Or are artworks of the Eiffel tower not allowed if the first depiction of it is a photograph that is not CCBY? Prototyperspective (talk) 18:58, 19 November 2023 (UTC)[reply]
COM:BASEDONPHOTO is the relevant Commons policy for a drawing based on a single photograph: it requires the photographer's permission. I too would see no copyright difference between a human sketching a copy of a single specific photograph and an AI doing the same thing digitally. Belbury (talk) 19:19, 19 November 2023 (UTC)[reply]
Thanks, a link to this policy was missing here so far. I don't see how it addresses the issue of photographing things like the Eiffel tower to copyright them away, though. In this case, a person was photographed. I object to the notion that if the first photograph of a person, animal, object, or whatever is not in the public domain, it also can't be drawn under CCBY. I too would not see a copyright difference between a human sketch of a single photograph and an AI doing the same thing digitally. That is what img2img is; the case above, however, is not based on a single image but on many images, including many images of women. It would never work if it was based on just one image. Prototyperspective (talk) 22:01, 19 November 2023 (UTC)[reply]
The case above is not based on a single image but many images, including many images of women... I'm not convinced. The results shown look very much like they are based primarily on the National Geographic photo, possibly because there were many copies of it in the training data. Omphalographer (talk) 22:18, 19 November 2023 (UTC)[reply]
@Prototyperspective: "the notion that if the first photograph of a person, animal, object, or whatever is not in the public domain, it also can't be drawn under CCBY." That's a straw-man argument, as is your Eiffel Tower example. You are refuting a claim that no one is making. While we have little insight into the "mind" of a generative AI system, I think we can reasonably conclude that if the AI had not seen that particular copyrighted image or works derived from it, then the chance is vanishingly small that it would have produced this particular image. And that is the essence of plagiarism. - Jmabel ! talk 01:06, 20 November 2023 (UTC)[reply]
You may be misunderstanding COM:BASEDONPHOTO. It isn't saying that once somebody takes a photo of a subject, they have control over anyone who chooses to sketch the same subject independently in the future. It only applies to someone making a sketch using that photo alone as their reference for the subject.
The "many images" point doesn't seem very different from how a human would approach the same copying process. The human would also be applying various internal models of what women and hair and fabric generally look like, when deciding which details to include and omit, and which textures and styles to use. It would still result in a portrait that had been based closely on a specific existing one, so would be infringing on the original photographer's work. Belbury (talk) 13:01, 20 November 2023 (UTC)[reply]
Now there are many good points here.
I won't address them in-depth or make any statements as to whether I agree with these points and their conclusion. Just a brief note on an issue and a point of unclarity: what if that photo is the only photo of the organism or object? Let's say you want to draw an accurate artwork of an extinct animal photographed only once, where you'd use the photo as a reference – I don't think current copyright law finds you are not allowed to do so. In this case, I think this photo is the only known photo of this woman, whose noncopyrighted genetics further emphasize her eyes making a certain noncopyrighted facial expression. Prototyperspective (talk) 13:18, 20 November 2023 (UTC)[reply]
@Prototyperspective: I am going to assume good faith, and that you are not just arguing for the sake of arguing, but this is the last time I will respond here.
  • If there is exactly one photo (or other image) of a given organism or object, and it is copyrighted, and you create a work that is clearly derivative of it in a degree that is clearly plagiaristic, then most likely you are violating copyright. Consider the Mona Lisa. We don't have any other image of that woman. If it were a painting recent enough to still be in copyright, and you created an artwork that was nearly identical to the Mona Lisa, you'd be violating Leonardo's [hypothetical] copyright.
  • For your "extinct animal" case: probably the way to create another image that did not violate copyright would be to imagine it in a different pose (based at least loosely on images of a related species) and to draw or otherwise create an image of that. But if your drawing was very close to the only known image, and that image was copyrighted, you could well be violating copyright.
  • Again: the user didn't ask the AI to draw this particular woman. They asked for "an average woman of Afghanistan," and received a blatant plagiarism of a particular, iconic photo. Also, you say, "I think this photo is the only known photo of this woman." I suppose that may be an accurate statement of what you think, but it also tells me you have chosen to listen to your own thoughts rather than do any actual research. It is not the only photo of Sharbat Gula, nor even the only published photo of her. Other photos from that photo session when she was 12 years old were published (though they are less iconic) and I have seen at least two published photos of her as an adult (one from the 2000s and one more recent). I suspect there are others that I have not seen. {{w|Sharbat Gula|She's had quite a life}} and now lives in Italy.
Jmabel ! talk 21:16, 20 November 2023 (UTC)[reply]
This exact issue is described in Commons:AI-generated media#Copyrights of authors whose works were used to train the AI. It isn't discussed in other Commons policies because those documents were generally drawn up before AI image generation was a thing. Omphalographer (talk) 19:42, 19 November 2023 (UTC)[reply]
Similarly to the issues presented in the Twitter/X thread, there is a lawsuit by a group of artists against several companies (incl. Midjourney and Stability AI), where a similar concern is presented (i.e. AI-generated media taking precedence over directly related images and works). I think this, among other things, is important to consider when deciding what scope Commons has in regards to AI-generated media. EdoAug (talk) 12:57, 9 December 2023 (UTC)[reply]

Archiving[edit]

I would like to set up archiving for this talk page to make active discussions more clearly visible. What do you think of an 8-month threshold of inactivity for now? By my estimation that would archive about half or less of this talk page. Of course, the setting can be revisited later. whym (talk) 09:21, 25 November 2023 (UTC)[reply]

I made one with 300d, which should let discussions stay around for about 1 year. :) --RZuo (talk) 14:35, 25 November 2023 (UTC)[reply]
I think one year is by far not long enough to keep discussions. This should be much longer. It needs to be possible to figure out why a certain rule was implemented and what the substantiation was. JopkeB (talk) 04:41, 9 December 2023 (UTC)[reply]
@JopkeB: It's not like you can't look at the archive. - Jmabel ! talk 08:20, 9 December 2023 (UTC)[reply]
Thanks, User:Jmabel, I did not understand that "let discussions stay around for about 1 year" means that discussions stay on this talk page for about a year and are then moved to the archive. Of course this is OK. JopkeB (talk) 09:44, 9 December 2023 (UTC)[reply]
I think 1 year was a good start. When I posted this,[2] the talk page size was 117k bytes. Now it's 119k bytes. Can we shorten the threshold to 8 months? Usually, busy pages have (and I would say, need) more frequent archiving. whym (talk) 11:58, 5 January 2024 (UTC)[reply]

Something I would actively like us to do involving AI images[edit]

I think it would be very useful to track the evolution of generative AI by giving some specific collection of prompts identically to various AI engines and see what each produces, and repeating this experiment periodically over the course of years to show how those engines evolve. Now that would clearly be within scope, if we don't trip across copyright issues of some sort. - Jmabel ! talk 21:55, 10 December 2023 (UTC)[reply]

No need to highlight that as being more clearly within scope here. I don't think you have thought long about the many potential and already existing use-cases of AI-generated media, especially once these tools improve. At some point, with enough expertise, you could possibly generate high-quality images of anything you can imagine, matching quite closely what you intended to depict (if you're skilled at prompting & modifying); how people don't see a tremendous potential in that is beyond me.
One part of that has already been done. However, that one implementation only compares one specific prompt at one point in time.
It would also be interesting to see other prompts such as landscape and people scenes prompts as well as how the generators improve on known issues such as misgenerated hands or basic conceptual errors (where e.g. people looking exactly the same are generated multiple times instead of only once as prompted). I think instead of uploading many images of the same prompt using different styles (example) it would be best to upload large collages (example) that include many styles/images at once. Since there is such a large number of styles, such applications are not as valuable as some other ones where one or a few images are enough and may even close a current gap. Prototyperspective (talk) 14:46, 13 December 2023 (UTC)[reply]
For collages/montages we still prefer to also have each individual image available in a file of its own. - Jmabel ! talk 19:03, 13 December 2023 (UTC)[reply]

More food for thought on AI-generated content[edit]

https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/

Omphalographer (talk) 18:02, 18 December 2023 (UTC)[reply]

I think what they're posing there is more of a moral than a copyright question. What is being "stolen" is the concept of "man with wood-carved dog". Possibly none of the AI-generated images presented there is close enough to the original photos (also shown) to be considered a derivative work (I'm not quite sure, but the general characteristics of a bulldog or a German Shepherd aren't copyrighted, and the AI-works could be seen as just another interpretation of the concept - and mere concepts aren't copyrighted). But that doesn't mean that the moral question is uninteresting or unimportant. It's something we should consider when we talk about what we want to host on Commons. I previously advocated for keeping AI-generated content here only if it's in use in a Wikimedia project, but this makes me think: Imagine that Wikimedia projects start using AI-generated images of this kind to show something they otherwise couldn't show for copyright reasons? Thanks to fair use, some projects like English-language Wikipedia can show a lot more than others, for example, en:The Empire of Light does contain the works of Magritte under the fair use policy. But as German-language Wikipedia doesn't accept fair use (due to the absence of that legal provision in German-language countries), de:Das Reich der Lichter only links to images and doesn't show them directly. Well, now a German-language Wikipedian could tell the AI to generate a "painting of a house surrounded by trees at night, but under a daylight sky" and the result would probably be a "painting" that is similar to Magritte's works in its concept. It could then be used with a caption like "AI-generated approximation of Magritte's Empire of Light series", maybe without being a copyright violation - but I think we wouldn't want that? Gestumblindi (talk) 23:45, 18 December 2023 (UTC)[reply]
The generated images in question here go beyond copying a "concept". They're being generated as fairly close replicas of specific, identifiable source photos; the article has examples.
Whether an "approximation" of a painting would be acceptable is questionable. If it's a close enough approximation, especially if it's imitating a specific work of art, that's likely to push it over the edge into being a derivative work. It's also questionable whether those projects would consider such an image educationally useful. Omphalographer (talk) 00:20, 19 December 2023 (UTC)[reply]
I see now that some of the images are indeed very close to some of the originals (including virtually the same trees in the background, for example), but for others I would still say that they follow the concept without necessarily being a derivative work for copyright purposes. It would be legal (though not terribly original) to sculpt your own wooden dog and pose in a similar way as Michael Jones does. The very first image on the page, for example, has a background that is completely different from all of the photos by Jones shown, the dog is "sculpted" very differently, and the man looks nothing like Michael Jones. Gestumblindi (talk) 01:19, 19 December 2023 (UTC)[reply]
"but I think we wouldn't want that?" It's not Commons responsibility to decide what images German Wikipedia users can use anymore than it's Commons responsibility to pick what admins they should have Trade (talk) 18:29, 20 December 2023 (UTC)[reply]
I meant "we" more generally in the sense of the Wikimedia communities, not just Commons. I would strongly suspect that German-language Wikipedia's community would also be against such an approach. Gestumblindi (talk) 19:53, 20 December 2023 (UTC)[reply]

Latest uploaded AI-generated files[edit]

Is there any way to see them? 2804:14D:5C32:4673:CC8C:1445:1D43:194B 17:58, 27 December 2023 (UTC)[reply]

Searching for the name of software and sorting by recency is the best simple way.
WMC needs walls of images sortable by recency that also show images of subcats. All AI images should be in a subcat of Category:AI-generated media. PetScan can be helpful too but seems dysfunctional most of the time. Prototyperspective (talk) 18:16, 27 December 2023 (UTC)[reply]
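As an illustration of the "search by software name, sorted by recency" approach, here is a rough sketch of how such a query could be built against the MediaWiki Action API (the `incategory:` keyword and the `srsort=create_timestamp_desc` sort are CirrusSearch features; this sketch only constructs the request parameters, sending them requires an HTTP client and network access):

```python
# Sketch: build MediaWiki API parameters to list files in an AI-related
# category on Commons, newest uploads first. Only builds the parameter
# dict; pass it to e.g. requests.get(COMMONS_API, params=...) yourself.
COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def build_recent_ai_search(category: str = "AI-generated media",
                           limit: int = 50) -> dict:
    """Return query parameters for a CirrusSearch file search."""
    return {
        "action": "query",
        "list": "search",
        "srsearch": f'incategory:"{category}"',  # files in this category
        "srnamespace": 6,                        # 6 = File: namespace
        "srsort": "create_timestamp_desc",       # newest first
        "srlimit": limit,
        "format": "json",
    }

params = build_recent_ai_search("Images generated with DALL-E", 20)
```

The same search string can also be pasted directly into the Commons search box with "Sort by: Creation date – newest first" selected.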
Of course we have no easy way to search for AI-generated files that aren't properly identified as such. If AI / the name of the software isn't mentioned, it's difficult. Gestumblindi (talk) 19:41, 27 December 2023 (UTC)[reply]
Either by context or art style... File:Prinssi Leo.jpg and sometimes, you just can't tell... File:Gao ke qing in uniform.png hf. Alexpl (talk) 17:37, 2 January 2024 (UTC)[reply]
"Prinssi Leo" is obviously AI-generated; I've just nominated a batch of images by that uploader for deletion (Commons:Deletion requests/Files uploaded by SKoskinenn). "Gao ke qing" looks like a low-quality photo but is likely out of scope anyway. Omphalographer (talk) 17:46, 2 January 2024 (UTC)[reply]
But vaguely educational stuff, even if most likely an AI work, stays in. Like File:வானவில்-அணிவகுப்பு V4 compressed.pdf and its fellow drawings. That is kind of a problem. Alexpl (talk) 21:04, 3 January 2024 (UTC)[reply]

File:Crying robot yelled at by newspaper, dall-e 3.png Alexpl (talk) 07:18, 12 January 2024 (UTC)[reply]

Which, besides anything else, has two unrelated images, one uploaded over the other. - Jmabel ! talk 16:09, 12 January 2024 (UTC)[reply]
Well, it may look like the guy who did the en:I, Robot (film) artwork did way too many depressants, but at least it is modestly sized with 1.3 MB, compared to some of the "true" stuff we have got in tif-format. Alexpl (talk) 08:13, 17 January 2024 (UTC)[reply]

File:Jose Maria Narvaez.jpg and File:Weugot photo.jpg, last one not marked as AI, but looks like work by "pixai.art". Alexpl (talk) 20:42, 18 January 2024 (UTC)[reply]

Thanks for pointing these out. I've nominated both for deletion - the first because it's ahistorical, the second (along with a few others) because it's unused personal artwork. Omphalographer (talk) 21:04, 18 January 2024 (UTC)[reply]
Not to go off on conspiracy theories or anything, but it's interesting that both of those files were uploaded today by new users who don't have any prior edits. --Adamant1 (talk) 22:12, 18 January 2024 (UTC)[reply]
I don't think there's a connection, beyond that they're both recent uploads.
The uploader of File:Jose Maria Narvaez.jpg also uploaded a number of other AI-generated images of historical figures; I've brought the issue up with them on enwiki. Omphalographer (talk) 00:30, 19 January 2024 (UTC)[reply]
File:Count dracula ai-art.jpg Alexpl (talk) 10:51, 19 January 2024 (UTC)[reply]

File:King Robert III.webp, File:Emperor Robert.webp and, even more concerning, File:David H Anderson - Portrait of George Washington Custis Lee 1832-1913 1876 - (MeisterDrucke-1182010).jpg + all the other stuff by Dboy2001. Alexpl (talk) 08:17, 19 January 2024 (UTC) File:Moplah Revolt guerilla army.jpg, File:Malbar Moplahs.jpg - claimed to be by painter "Edward Hysom" - but I was unable to verify. Alexpl (talk) 11:17, 6 February 2024 (UTC)[reply]

File:OIG1.IjIC.jpg and maybe File:OIG2 (1).png Alexpl (talk) 18:30, 8 February 2024 (UTC)[reply]

File:Robotica-industrial.jpg Alexpl (talk) 21:16, 8 February 2024 (UTC)[reply]

Various Putin impressions: File:Hang noodles on the ears 1.png - File:Hang noodles on the ears 2.png - File:Hang noodles on the ears 3.png Alexpl (talk) 20:09, 10 February 2024 (UTC)[reply]

I'd actually prefer to use these images over other AI-generated images of Putin - it's obvious that they aren't real, so there's less risk of them being used in place of real images. Omphalographer (talk) 21:31, 10 February 2024 (UTC)[reply]
I don't know why arbitrary AI images are posted here. They are dealt with: for example, they are quickly categorized so that they can be easily excluded, or speedily deleted when they claim to be made by some painter but weren't.
All of this looks like uploaders trying to make things appear problematic to build the impression that they are, when they actually aren't. As for the 'risk of being used in place of real images', moving files to "…(AI-generated)" file titles is one possibility, but that too isn't really a problem; and if it were, the problem would not be the file being here but something else missing, such as 1) a way to view all file-uses of all files in "AI-generated images of real people" and 2) somebody using that / some report showing that. Another thing needed, not just for AI images but also for other issues, are bots or scripts that do things like reverse image search to tag likely copyvios, as well as tag AI images not yet categorized into "AI-generated media" (currently the number of AI images not yet in that cat is very small). Prototyperspective (talk) 22:08, 10 February 2024 (UTC)[reply]
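To illustrate the tagging idea mentioned above: the core check a bot or script would make is trivial. The following is a toy sketch (a hypothetical helper, not an existing bot); it assumes the caller has already fetched the file's categories and the set of category names sitting under Category:AI-generated media:

```python
# Toy sketch: decide whether a file still needs an AI-generated-media
# category, given its current categories and the known AI category tree.
def needs_ai_tag(file_categories: set, known_ai_categories: set) -> bool:
    """True if none of the file's categories marks it as AI-generated."""
    return not (file_categories & known_ai_categories)

known = {"AI-generated media", "AI-generated images",
         "Images generated with DALL-E"}
needs_ai_tag({"Portraits", "Images generated with DALL-E"}, known)  # → False
needs_ai_tag({"Portraits of men"}, known)                           # → True
```

The hard parts in practice are enumerating the subcategory tree (e.g. with PetScan or the API) and deciding which untagged files are actually AI-generated, which this sketch does not attempt.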
I will never ever add that category to anything. You can do it - or program a bot to do it. (oh, wait...) Alexpl (talk) 11:02, 11 February 2024 (UTC)[reply]
So what is your intention when you post these filenames to a talk page section titled "Latest uploaded AI-generated files"? From the lack of AI categorisation at the time you post them, are you asking other editors to help you judge whether or not they're AI-generated? Belbury (talk) 13:30, 11 February 2024 (UTC)[reply]
To show that "we" are not up to the task of hosting AI works – unless the original uploaders kindly categorize them correctly. Which they don't do in many cases. Alexpl (talk) 19:47, 11 February 2024 (UTC)[reply]

File:Bibicrouline.jpg Alexpl (talk) 20:13, 10 February 2024 (UTC)[reply]

Possible alternative/additional text for this page[edit]

The following was my summary of a recent discussion at Commons:Village pump (based loosely on an earlier draft by [[User:JopkeB]]).


1) Licensing: Commons hosts only images that are either public-domain or free-licensed in both the U.S. and their country of origin. We also prefer, when possible, for works that are in the public domain in those jurisdictions to also offer licenses that will allow reuse in countries that might allow these works to be copyrighted. As of the end of 2023, generative AI is still in its infancy, and there are quite likely to be legislation and court decisions over the next few years affecting the copyright status of its outputs.

As far as we can tell, the U.S. considers any contribution of a generative AI to a work, whether that is an enhancement of an otherwise copyrightable work or an "original" work, to be in the public domain. That means that if a work by a generative AI is considered "original" then it is in the public domain in the U.S., and if it is considered "derivative" then the resulting work has the same copyright status as the underlying work.

However, some countries (notably the UK and China) are granting copyrights on AI-generated works. So far as we can tell, the copyright consistently belongs to the person who gave the prompt to the AI program, not to the people who developed the software.

The question of "country of origin" for AI-generated content can be a bit tricky. Unlike photographs, they are not "taken" somewhere in particular. Unlike content from a magazine or book, they have no clear first place of publication. We seem to be leaning toward saying that the country of origin is the country of residence of the person who prompted the AI, but that may be tricky: accounts are not required to state a country of residence; residence does not always coincide with citizenship; people travel; etc.

Consequently, for AI-generated works:

a) Each file should carry a tag indicating that it is public domain in those countries that do not grant copyrights for AI-generated art.
b) If its country of origin is one that grants copyrights for AI-generated art, then in addition to that tag, license requirements are the same as for any other copyrighted materials.
c) If its country of origin is one that does not grant copyrights for AI-generated art, then we [require? request? I lean toward require] an additional license to cover use in countries that grant copyrights for AI-generated art.

For AI-enhanced works, the requirements are analogous. We should have a tag to indicate that the contribution of the AI is public domain in those countries that do not grant copyrights for AI-generated art, and that in those countries the copyright status is exactly the same as that of the underlying work. We would require/request the same additional licenses for any copyrightable contribution as we do for AI-generated work. In most cases, {{Retouched}} or other similar template should also be present.
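The tag-and-licence logic of points (a) through (c) can be sketched as a simple decision function. This is illustrative only; the notice names below are hypothetical placeholders, not existing Commons templates:

```python
# Illustrative sketch of points (a)-(c) above. Notice strings are
# hypothetical placeholders, not actual Commons template names.
def required_notices(origin_grants_ai_copyright: bool) -> list:
    # Point (a): every AI-generated file carries the PD-in-non-AI-copyright
    # countries tag, regardless of country of origin.
    notices = ["tag: PD in countries without AI copyright"]
    if origin_grants_ai_copyright:
        # Point (b): normal licensing requirements apply, with the
        # copyright presumed to belong to the person who gave the prompt.
        notices.append("free licence from the prompter, as for any copyrighted work")
    else:
        # Point (c): an additional licence to cover reuse in countries
        # that do grant copyright on AI-generated works.
        notices.append("additional licence covering countries that grant AI copyright")
    return notices
```

The open question flagged in point (c) (require vs. request) and the difficulty of even determining the country of origin are, of course, not resolved by writing the rule down this way.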

2) Are even AI-generated "original" works derivative? There is much controversy over whether AI works are inherently all derivative, whether derived from one or a million examples, and whether the original works are known or not. Files can only be deleted for copyright infringement when there are tangible copyright concerns, such as being a derivative work of a specific work you can point to.

Most currently available AI datasets include stolen images, used in violation of their copyright or licensing terms. Commons should not encourage the production of unethically produced AI images by hosting them.

AI datasets may contain images of copyrighted subjects, such as buildings in non-FOP countries or advertisements. Can we say that if, for example, a building in France is protected by copyright, an AI-generated image of that building would be exactly as much of a copyright violation as a photo of that building? Seems to me to be the case.

3) Accuracy: There is zero guarantee that any AI-generated work is an accurate representation of anything in the real world. It cannot be relied upon for the accurate appearance of a particular person, a species, a place at a particular time, etc. This can be an issue even with works that are merely AI-enhanced: when AI removes a watermark or otherwise retouches a photo, that retouching always involves conjecture.

4) Scope: We only allow artworks when they have a specific historical or educational value. We do not allow personal artworks by non-notable creators that are out of scope; they are regularly deleted as F10 or at DR. In general, AI-generated works are subject to the same standard.

5) Negative effects: AI-generated images on Commons can have the deleterious effect of discouraging uploaders from contributing freely licensed real images of subjects when an AI-generated version exists, or of leading editors to choose a synthetic image of a subject over a real one. As always, we recommend that editors find, upload and use good images, and it is our general consensus that an AI-generated or AI-enhanced image is rarely better than any available image produced by more traditional means.

That said, there are good reasons to host certain classes of AI images on Commons. In decreasing order of strength of consensus:

  1. Images to illustrate facets of AI art production.
    clearly there would need to be a decision on how many images are allowed under this rubric, and what sort of images.
  2. Use of ethically-sourced AI to produce heraldic images that inherently involve artistic interpretation of specifications.
  3. Icons, placeholders, diagrams, illustrations of theoretical models, explanations of how things work or how to make something (for manuals, guides and handbooks), abstracted drawings of, for instance, tools and architectural elements, and other cases where we do not need historical accuracy.
  4. For enhancing/retouching images, improving resolution and source image quality, as long as the original image stays on Commons; the enhanced one gets a different filename and there should be a link to the original image in the image description. AI-based retouching should presumably be held to the same standards as other retouching.
  5. Because Commons generally defers to our sister projects for "in use" files, allow files to be uploaded on an "as-needed" basis to satisfy specific requirements from any and all other Wikimedia projects. Such files are in scope on this basis only as long as they are used on a sister project. We will allow some time (tentatively one week) after upload for the file to be used.
    The need to allow slack for files to be used on this basis will raise some difficulties. We need to allow for a certain amount of good-faith uploads of such images that turn out not to be used, but at some point, if a user floods Commons with such images and few or none are used this way, that needs to be subject to sanctions.
  6. Our usual allowance for a small number of personal images for use on user and talk pages should work more or less the same for AI-generated images as for any other images without copyright issues, as long as their nature is clearly labeled. E.g. an AI-generated image of yourself or an "avatar" for your user page; a small number of examples of AI-generated works where you were involved in the prompting. (In short, again "same standard as if the work were drawn by an ordinary user.")
  7. (Probably no consensus for this one, but mentioning it since JopkeB did; seems to me this would be covered by the "Scope" section above, "same standard as if the work were drawn by an ordinary user.") For illustrating how cultures and people could have looked like in the past.

While there is some disagreement as to "where the bar is set" for how many AI-generated images to allow on Commons, we are at least leaning toward all of the following as requirements for all AI-generated images that we host:

  1. All files must meet the normal conditions of Commons. Files must fall within Commons' scope, including notability, and any derivative works must use only public-domain and free-licensed materials. File pages must credit all sources.
  2. AI-generated or AI-enhanced images must be clearly recognizable as such:
    1. There should be a clearly visible, prominent note that it is an AI image, mentioning that it is fake; perhaps add Template:Factual accuracy and/or another message to every file with an AI illustration, preferably via a template, and perhaps to every file uploaded through the Upload Wizard where the box has been ticked to indicate that an AI image has been uploaded
    2. Differentiation between real and generated images should also be done at category level: categories containing images of real places and persons should not be flooded with fake images, and AI-generated images should be in a (sub)category of Category:AI-generated images;
  3. Whether in countries that allow copyright on AI-generated images or not, these images should not be identified simply as "Own work". The AI contribution must always be explicitly acknowledged.
  4. There is at least a very strong preference (can we make it a rule?) that file pages for AI-generated or AI-enhanced images should indicate what software was used, and what prompt was given to that software. Some of us think that should be a requirement.
  5. With very rare exceptions—the only apparent one is to illustrate AI "hallucinations" as such—AI-generated or AI-enhanced images should contain no obviously wrong things, like extra fingers or an object that shouldn't be there; these should be fixed. Probably the best way to do this is to first upload the problematic AI-generated file, then overwrite that with the human-generated correction.

Jmabel ! talk 20:01, 31 December 2023 (UTC)[reply]

Under "good reasons", I'd split point 3 ("Icons, placeholders, diagrams...") into two subcategories:
3A. Icons, placeholders, decorative illustrations, and other images where the precise contents of the image and its accuracy, especially in details, aren't relevant.
3B. Technical illustrations, explanations, blueprints, instructions, abstract drawings, and other images where details are expected to be meaningful and accurate.
And I'm not convinced that 3B is a good reason to use AI-generated images. Most of the image generators I've worked with have been spectacularly poor at producing meaningful technical illustrations; they have a tendency to make up stylized nonsense to fill space, e.g. [3], [4], [5].
As far as requirements are concerned, agreed on all points; including information about the tool used to generate images should be mandatory so that, in the event that an image generation model and its output are declared to be derivative works (which isn't out of the realm of possibility!), Wikimedia can identify and remove infringing images.
Omphalographer (talk) 23:47, 31 December 2023 (UTC)[reply]
The "good reasons" part should not be added to the page. We shouldn't try to define, and assume we can anticipate and know, all the potential constructive, useful applications of AI art. It's like trying to specify which purposes images made or modified with the software Photoshop, or any of its tools like its cut tool, can be used for, at a time when that software was still new. And I elaborated on lots of useful applications that aren't captured by these explicit reasons. For example, consider if we had no image of what the art style cubism looked like. Then an artistic AI image that illustrates that would be useful. And that is just one example; the same goes for any subject of human art culture of which we have no image, such as one of its subgenres or topics. This could be part of essays or be included somewhere (it would make the policy too long though) for illustrating use-cases that are most likely to be within scope.
Moreover, regarding "what software was used, and what prompt was given to that software": I agree on the former, but regarding prompts, those should only be encouraged, not required. For example, if images from elsewhere are added, these are often not included, and as I explained before, the person may not have saved the prompt, or it could be more than ten prompts for one image.
The good reasons part is already implied and delineated by other parts of the policy as well as other policies. Obviously, "Most currently available AI datasets include stolen images" is false and just makes the anti-AI-tools bias evident, since the images weren't stolen (have you still not looked up definitions of theft?) but used for training (or machine learning), similar to how humans can look at (and "learn" from) copyrighted artworks online (or in other public exhibitions) if they are accessible there. Prototyperspective (talk) 17:10, 1 January 2024 (UTC)[reply]
Sneaking insufficiently documented stuff in via Flickr (again) and being extremely casual about laws? If the necessary context for an AI file can't be provided, that file should be off limits for Commons. Donated funds should not end up in some legal battle with stock picture companies and others. Just stop it. Alexpl (talk) 22:18, 1 January 2024 (UTC)[reply]
Not sure what you refer to, and courts have confirmed this. Regarding "necessary context for an AI file": it should be provided, but it's not the prompt. Regarding "Donated funds should not end up in some legal battle": I'd say start by not throwing donated funds out the window and totally neglecting the software development, instead of coming up with hypothetical, unlikely horror scenarios…as if they'd sue the WMF for WMC hosting a few AI-generated files rather than some other entity (and fail). This is nothing but unfounded scaremongering. But I do agree that the software used to make images should get specified. Moreover, if you really cared about copyvios on WMC, there would long since be some bot that did at least some TinEye reverse searches and so on to suggest likely copyvios for editors to review. Prototyperspective (talk) 22:58, 1 January 2024 (UTC)[reply]
If images like the one in this example were uploaded by a regular user, they no doubt would be deleted as derivatives of the original. Yet you seem to think such images can't be derivatives if they were created by AI, simply because the AI in question was trained on millions of images, which is a totally ridiculous assertion. Same goes for you dismissing the chance of a lawsuit over something like that as unfounded scaremongering. There's a pretty good chance such lawsuits will happen in the future. The question is: do we want to needlessly risk the WMF getting involved in one simply to appease people like you, who for some bizarre reason think the precautionary principle doesn't apply to AI artwork? I'd say no, we don't.
Using caution when it comes to this stuff is in no way unfounded scaremongering. It's the default position when there's any risk of a lawsuit, and it's ludicrous to act like there isn't one in this case, whatever the details are of how the software was created. I'll also point out that there are plenty of models trained on extremely small, specialized datasets, which you seem to be ignoring for some reason, and there's an extremely high risk of them creating derivatives in those cases. Admittedly it's smaller with larger ones, but we have no way of knowing how much the AI was trained, what exactly it was trained on, and specialized models are becoming more and more common as time passes.
Regardless, there's no valid reason not to assume most currently available AI datasets include stolen images. At least the more popular ones are up front about the fact that they were trained on copyrighted works. There are none that aren't, as far as I know. Otherwise, there's no reason we can't make an exception specifically for models that were trained only on freely licensed images, but that doesn't mean we should just allow anything, including images from models that were clearly trained on copyrighted material. Not only does doing so put Commons at risk, it's also antithetical to the goals of the project. It's not like we can't loosen the restrictions to make exceptions for images created by certain models, or exclude others, as time goes on though. At the end of the day, I don't think there is, or needs to be, a one-size-fits-all, works-in-every-situation way to do this. And the policy will probably be heavily updated as time goes on and the technology changes. --Adamant1 (talk) 05:15, 3 January 2024 (UTC)[reply]
With regard to Midjourney, there are some recent allegations that their model was specifically designed to target specific artists and art styles. Some of the examples in that thread are fairly damning, e.g. [6]. Omphalographer (talk) 07:14, 3 January 2024 (UTC)[reply]
Wow, that's without the person even mentioning Cyberpunk 2077 in the prompt, too. It must have been nothing more than totally random chance based on the millions of images Midjourney was trained on though, lmao. --Adamant1 (talk) 10:38, 3 January 2024 (UTC)[reply]
"Yet you seem to think such images can't be derivatives if they were created by AI": No, that is not true and I never said or indicated that. That image would need to be deleted. The assertion is not ridiculous but e.g. backed by many sources such as this: "Stable Diffusion's initial training was on low-resolution 256×256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images from LAION-5B's full collection of 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024×1024 resolution (downsampled to 512×512)". The lawsuits would affect Midjourney or other large entities, and as said, we already delete derivatives. If you care about the risk of copyvios being hosted, then support bots that scan images via TinEye or other image reverse searches. Regarding "stolen images": ignoring what I said about that above. Regarding "clearly trained on copyrighted material": it's not even feasible otherwise, and you are also allowed to look at and learn from copyrighted media if they are put publicly online, as said but ignored earlier. Prototyperspective (talk) 12:42, 3 January 2024 (UTC)[reply]
"Ignoring what I said about that above": I didn't "ignore" what you said about bots. It's just not relevant to the discussion about what text should be added to the page. Using bots to scan images and editing the essay aren't mutually exclusive, obviously. Anyway, there's no reason you can't start a separate thread to discuss using bots to check for derivatives if you think it's important, but your general attitude around the subject has been that AI artwork is original due to the number of images models are sometimes trained on. Otherwise I guess you agree that "Most currently available AI datasets include stolen images," or at least can re-create them, but that clearly wasn't how you were acting. --Adamant1 (talk) 20:32, 3 January 2024 (UTC)[reply]
Briefly, it can be original or unoriginal in the sense of being a derivative work: it depends on what the image depicts. AI models aiming to deliver high-quality results currently seem to have to rely on algorithmic and selection proxies for quality (such as increasing the weighting of popular professional art vs. DeviantArt furry drawings); the earlier links are interesting, even though these things happen rarely, are spottable via image categories/description or image reverse search, seem to only occur for very well-known images with certain texts/prompts, and depend heavily on the prompt used. Prototyperspective (talk) 20:50, 3 January 2024 (UTC)[reply]
"the earlier links are interesting despite that these things happening rarely": It's probably a lot more common than that, since you're talking about some models generating a near-infinite number of images per second, or at least enough of a magnitude that it would be impossible to come up with original works of art that quickly and in those numbers. We don't really have any way of knowing the exact number, though. Nor is it relevant either, since the important thing is whether the number of derivatives generated by AI models is enough to justify including something about it in the article; be that a sentence like "Most currently available AI datasets include stolen images" or something similar, I don't really care. But it's clearly a problem, and at least IMO one worth mentioning, regardless of how it's ultimately dealt with. People should still be cautious not to assume AI artwork is original even if bots can scan websites for similar images or whatever. Again, they aren't mutually exclusive, and it's not like we don't already have policies reminding people that certain things might be or are copyrighted. So I don't really see what your issue is with adding that part of Jmabel's text to the essay. --Adamant1 (talk) 22:24, 3 January 2024 (UTC)[reply]
@Jmabel: I've added a recommendation to the licensing section per your suggestion. Nosferattus (talk) 00:21, 6 January 2024 (UTC)[reply]

Relevant deletion discussion[edit]

 You are invited to join the discussion at Commons:Deletion requests/File:Bing AI Jeff Koons Niagara take 3.jpeg. {{u|Sdkb}}talk 15:41, 3 January 2024 (UTC)[reply]

Need a template for AI-modified images[edit]

From recent discussions (here and at the Village Pump), it sounds like we need a specific template for AI-modified images (separate from {{AI upscaled}}). Belbury, is this something you could help with? Nosferattus (talk) 00:29, 6 January 2024 (UTC)[reply]

I was actually saying to another user recently (regarding a photo of an ancient metal statue where the resolution was unchanged but the face was given a subtle flesh tone, its nose filled out and its eyes "corrected") that {{AI upscaled}} should also apply to any image where an AI has added or altered details. We don't, I think, really care whether the pixel size of the image was increased.
Maybe we should just rename and rephrase the template to {{AI modified}}? Belbury (talk) 18:41, 6 January 2024 (UTC)[reply]
That would be fine with me. Nosferattus (talk) 02:12, 8 January 2024 (UTC)[reply]

Should something like the following text be added to the “are AI images in scope?” section[edit]

AI images should not be used to illustrate topics which already have quality non-AI images; however AI images should not other[wise] be treated differently from other files in regards to scope. Something is not in or out of scope because an AI made it. In-scope AI-generated files should not be nominated for deletion simply because they could hypothetically be done better by a human, or hypothetically generated on demand.

I thought of adding this because I’m seeing a lot of deletion arguments that are basically “AI is cheap because anyone can make it anytime” or “AI is inherently bad” and an unwritten attitude that “AI images don’t need to be reviewed individually when nominating them for deletion because of that”. I think we should make it clear that AI having unique problems doesn’t mean it should just be indiscriminately targeted on the basis of being AI.

Dronebogus (talk) 11:42, 8 January 2024 (UTC)[reply]

Would  Support it, but due to the large anti-AI bias here I don't know if it has a good chance, since clearing up such issues is usually not done on policy/essay pages like that. I also note that people shouldn't claim things are so super easy based on what they think rather than what they know and can prove. Additionally, we don't have any such discrimination for other tools in people's repertoires, such as Photoshop. Prototyperspective (talk) 12:08, 8 January 2024 (UTC)[reply]
I oppose this. This lets through entirely fictional representations of (for example) people of whom we have no image, just because they were generated by AI. We would never accept the same entirely fictional representation if it were drawn by a human. - Jmabel ! talk 19:55, 8 January 2024 (UTC)[reply]
No, nothing gets through "just because they were generated by AI". It's just that nothing gets deleted more or less just based on that. For images of people, what would be the problem with having an image made using AI, like these (accurate + high quality), of a person of whom we have no image? It doesn't mean Wikipedia needs to use it. In any case, that is just a very particular example, and the proposal would not imply that they are fine. I don't care much about this proposal since I don't think such things would need to be explicit, or that being explicit would necessarily help much. Also, there are lots of drawings of historical figures, so that part is also clearly false. Prototyperspective (talk) 20:05, 8 January 2024 (UTC)[reply]
It's one thing to have (for example) Rembrandt's painting of Moses, or a 15th-century wood cut showing an unknown author's conception of François Villon, but if we didn't have those, a random user's illustration of either would presumably remain out of scope. I don't see why an AI-generated illustration would be any more acceptable. - Jmabel ! talk 21:03, 8 January 2024 (UTC)[reply]
I would say the two examples you've presented here are very high quality, and I wouldn't at all hold it against them that they were produced by AI. Equivalents by a user would probably be considered in scope as well. (This presumes no copyright issues about them being possibly derivative.) But only a tiny fraction of the AI-generated content I see here is anywhere near this level. Plus, we know that these genuinely resemble the people in question. - Jmabel ! talk 21:07, 8 January 2024 (UTC)[reply]
However, I do question the claim of "own work" on those two images. - Jmabel ! talk 21:08, 8 January 2024 (UTC)[reply]
They aren't any more acceptable, but not less acceptable either, and if the image was CC BY it could be there; maybe you just never took a look at the visual arts cats, so I don't know why you have these false assumptions; the reason why there are barely any modern high-quality ones is that artists nearly never license them under CC BY. In any case, you seem to refute yourself in regards to those two images, and the images are clearly labeled as made via Midjourney by the WMC user who uploaded them, whom I thank for their constructive contributions. But yes, the AI content here is sadly largely significantly below that level of quality. In any case, nothing is "let through" just because it was made via AI. Prototyperspective (talk) 21:20, 8 January 2024 (UTC)[reply]
@Prototyperspective I don't find either of those images to be "accurate and high quality". Look at the image of Gandhi. The nose is wrong. The body is too small for the head. The ear is wrong. The splashes of colour (including the one on the head) may add visual interest but they detract from the main point which is what Gandhi looked like. The image of Hawking doesn't even look like him. I would not use either of these pictures. I understand your interest in AI generated images, but let's not confuse a good-looking image with an accurate depiction. Counterfeit Purses (talk) 04:32, 9 January 2024 (UTC)[reply]
We have human-made drawings of people with no free image, e.g. Syd Barrett, that are usable and in use. Just because many AI portraits are inaccurate doesn't mean an accurate one doesn't potentially exist. Dronebogus (talk) 08:07, 9 January 2024 (UTC)[reply]
Many human-made artworks are also somewhat inaccurate.
Please look at the category instead of the two examples; I'm sure there are multiple that are of sufficient quality and accuracy even if one could argue that these two aren't. In any case, I don't know why we discuss this specific potential application; there are many other potential ways one could use AI images – it's just that people should pay attention to the accuracy of the image, just like for any other artwork, and probably should clarify wherever it's used that it's AI-made. Note that the use doesn't have to be Wikipedia, but it could also be useful there on some occasions. AI software is a new tool used by humans; we don't discriminate against other tools, such as Photoshop, just because of the tool itself. Is there a similar discrimination against photos modified with Adobe Lightroom, or are people simply deciding whether or not they use the image anywhere (Wikipedia or elsewhere) on a case-by-case basis? Prototyperspective (talk) 11:26, 9 January 2024 (UTC)[reply]
 Oppose. While I'm sure this represents your personal views on AI-generated content, it does not represent the views of the Commons community at large. Omphalographer (talk) 04:29, 9 January 2024 (UTC)[reply]
But are the “views of Commons at large”, which seem to be based on subjective distaste for AI images, right simply by popularity, especially when they frequently conflict with hard-and-fast principles like COM:SCOPE or COM:INUSE? Dronebogus (talk) 08:09, 9 January 2024 (UTC)[reply]

@Prototyperspective: you say you are not wanting to privilege AI images, but you write, "In-scope AI-generated files should not be nominated for deletion simply because they could hypothetically be done better by a human, or hypothetically generated on demand." This begs the question. Obviously, if it's in scope and doesn't have copyright problems we keep it, but often a DR is how we determine whether it is in scope. - Jmabel ! talk 18:36, 9 January 2024 (UTC)[reply]

No, I didn't write that. And I don't see how you come to this conclusion – that's not at all what this implies. It just says that it shouldn't be nominated merely "because they could hypothetically be done better by a human, or hypothetically generated on demand" and I don't know how to explain or rephrase it clearer than it already is…it doesn't mean AI images can't or should never be nominated for deletion. If you see any image out of the 100 million and don't consider it within scope, you can nominate it (hopefully including an explanation) – nothing about that is proposed to be changed here. --Prototyperspective (talk) 21:43, 9 January 2024 (UTC)[reply]
@Prototyperspective: I'm sorry: [[User:Dronebogus]] wrote it, you concurred. Also: I just now made a minor edit there, for something that was not even grammatical.
I suppose it could be understood your way as well, but I'm still concerned with the question-begging. I think it comes down to what we mean by "other files" in the statement that they should not "be treated differently from other files in regards to scope." If that means art by non-notable users, yes, I'd agree to that. If it means pretty much anything else I can think of, I would not. - Jmabel ! talk 02:20, 10 January 2024 (UTC)[reply]
Oppose - Our scope rules should be delineated at COM:SCOPE not here. The less we say about scope here the better, other than simply reminding people that AI-generated media follow the same scope rules as all other media on Commons. There should be no rules to either favor or disfavor AI-generated media when it comes to scope. Nosferattus (talk) 20:10, 9 January 2024 (UTC)[reply]
 Oppose I see little reason in binding volunteer personnel to judging the "scope" of every submitted AI work. And when the point is reached where we drown in AI sludge and drastic measures have to be taken, the "scope" won't matter anyway. Alexpl (talk) 11:12, 18 January 2024 (UTC)[reply]

An alternative[edit]

I just thought of this, based on a remark I just made above: how about "Except for certain enumerated exceptions [which we will have to spell out], AI-generated art will be held to the same standards as non-photographic original work by a typical, non-notable user." The exceptions would include at least some of the items listed above in the section #Possible alternative/additional text for this page, in the portion beginning, "That said, there are good reasons to host certain classes of AI images on Commons". And, yes, it may be that this belongs more on COM:SCOPE than here. - Jmabel ! talk 02:27, 10 January 2024 (UTC)[reply]

That’s basically what it already says. I’m not sure what “enumerated exceptions” we need to spell out. “In-scope works”? Now we’re getting in a rut! Dronebogus (talk) 05:30, 10 January 2024 (UTC)[reply]
@Dronebogus: On the enumerated exceptions, most obviously images to illustrate facets of AI art production. We'd presumably like a certain number of examples of the output of any notable generative AI. But see the list I referred to in my original remark, which came out of a discussion involving a dozen or so users on the Village pump.
“In-scope works”: I don't see that phrase anywhere in what I wrote. What am I missing? - Jmabel ! talk 16:13, 12 January 2024 (UTC)[reply]
I think the "non-photographic original work" is a good point to make. I'd certainly support that.--Prosfilaes (talk) 17:19, 12 January 2024 (UTC)[reply]
Yeah, and that seems like it's a valid point beyond AI-generated media as well. Broadly speaking, Commons holds non-photographic images to higher standards than photos. We generally pare diagrams, charts, logos, flags, and other non-photographic content down to one or perhaps a few definitive, high-quality images, whereas photos of potentially useful subjects are only culled if they're unusually bad. (And that's good! Mediocre non-photographic images can be improved through collaborative editing; mediocre photos are improved by taking better photos.) Omphalographer (talk) 17:54, 12 January 2024 (UTC)[reply]
That’s a good point. But I still think people are being unnecessarily harsh on AI to the point of steamrolling over COM:INUSE in massive indiscriminate culls. I’m not trying to unfairly favor AI images, just prevent this sort of behavior by marking it as explicitly disruptive. Dronebogus (talk) 19:04, 12 January 2024 (UTC)[reply]

Wikimedia Commons AI: a new Wikimedia project[edit]

The discussion takes place at the discussion page of the proposal on Meta-Wiki.

I may have thought of a solution. See my proposal for a new Wikimedia sister project here. I would love to discuss this proposal with you on the discussion page of the proposal! Kind regards, S. Perquin (talk) 22:49, 19 January 2024 (UTC)[reply]

Declaration of AI prompts[edit]

when an image is determined to be ai generated, but the file descriptions, sdc, etc., have nothing indicating it is, should it be a requirement for the uploader to provide the methodology (ai models used, prompts given, etc.)?

and if the uploader doesn't provide the info, what should be done? RZuo (talk) 14:24, 11 February 2024 (UTC)[reply]

Are there any ways to confirm what prompts have been used in the generation of AI images? EdoAug (talk) 14:27, 11 February 2024 (UTC)[reply]
No, not really. Moreover, a prompt isn't even sufficient to reproduce an output in a lot of newer hosted AI image generators like Midjourney, as generating an image is an interactive process, and the underlying model and software are modified frequently. With this in mind, I'd consider the inclusion of a textual prompt to be of limited value.
Including information about what software was used is much more useful, and I'd be on board with requiring that. Ditto for uploading the original image when using tools which transform images. Omphalographer (talk) 23:42, 11 February 2024 (UTC)[reply]
@Omphalographer: I'm trying to understand: if, for example, someone uploads an image as an AI representation of the Temple of Solomon, would it not be important whether "Temple of Solomon" was somewhere in the prompt? Or are you saying that it would be so likely out of scope on some other basis that it doesn't matter? Or what? — Preceding unsigned comment added by Jmabel (talk • contribs) 03:09, 12 February 2024 (UTC)[reply]
Why would it matter whether "Temple of Solomon" was in the prompt? How the software got there is only relevant if the image is being used as an example of AI; if it's being used as an example of the Temple of Solomon, then the proof is in the pudding; does it accurately represent what it's intended to represent? Which is not really the "Temple of Solomon"; it's one conception of what that might have looked like, since we have no photographs or remains or drawings from life.--Prosfilaes (talk) 21:18, 12 February 2024 (UTC)[reply]
We have, essentially, no way to know whether a particular representation of the Temple of Solomon is accurate. It might be in scope to know what (for example) Stable Diffusion does to represent the Temple, but if you prompted Stable Diffusion, for example, "draw me the Mormon Temple in Salt Lake City" and then posted it as a representation of the Temple of Solomon, that is almost certainly not acceptable. - Jmabel ! talk 01:29, 13 February 2024 (UTC)[reply]
Definitely not acceptable. More like "ArtStation" material. Alexpl (talk) 15:10, 13 February 2024 (UTC)[reply]
Yes, we have no way to know whether a particular representation of the Temple of Solomon is accurate. I don't see why we should treat an AI version any differently from any other version; if it is useful as an image of the Temple of Solomon, we keep it. Asking for a prompt is a bit like asking for the palette used for painting an illustration of a dinosaur; everything you really need to know is in the final version.--Prosfilaes (talk) 15:31, 13 February 2024 (UTC)[reply]
Asking for a prompt is like asking the painter what they think they just painted. If they say ancient temple in Phoenician style with two bronze pillars supporting... at great length then we'd use that context to assess whether the image was of any educational use to anyone; if they said old stone temple breathtaking painterly style or Salt Lake City temple 10,000 BC we'd know that we could rename it or throw it out straight away.
File:Urânia Vanério (Dall-E).png (now renamed) was once added to that person's biography article on enwiki, the image uploader saying that it was AI-generated but not giving the prompt. So I had to waste a little time working out what I was actually looking at, and it turned out when asked that the uploader had just prompted an AI to draw 10 years old 19th century Brazilian-Portuguese girl. Belbury (talk) 15:54, 13 February 2024 (UTC)[reply]
"Asking for a prompt is like asking the painter what they think they just painted": That is something only people with very little practical experience of using AI art tools would say, and it is false in most or many cases. Regarding File:Urânia Vanério (Dall-E).png: that case does not seem to be good, assuming it's not based on an artwork of the specific person. Thus it should probably be removed from where it's used if that is the case. Prototyperspective (talk) 16:40, 13 February 2024 (UTC)[reply]
I believe I've got a good handle on how AI art tools function, thanks.
The uploader of File:Urânia Vanério (Dall-E).png suggested in the filename that they'd intentionally created an image of Urânia Vanério, but they knew that the image they created was produced from entering the generic prompt 10 years old 19th century Brazilian-Portuguese girl - the uploader knew they hadn't really created a picture of Vanério specifically, only someone generally of that century. That's very useful context for us when deciding if we need to rename or remove an image, so from that perspective it seems worth asking for clear prompts at the point of upload. Belbury (talk) 17:29, 13 February 2024 (UTC)[reply]
Yes in that case it was good to ask/require the prompt/the info on how they made the image. Very much agree with that, thanks for getting the user to disclose the prompt, and as said the image's file-use seems inappropriate. It can be very useful context and for example showing an optional field for prompt(s) used for the image if an AI tool was used could be a good thing. Prototyperspective (talk) 18:09, 13 February 2024 (UTC)[reply]
I think it would be good if the user was required to confirm that it's AI-generated, and they should also specify which tool was used, such as "Stable Diffusion", or give a link to the web interface used.
Btw, prompts are often no longer known by the prompter (and not stored by the website), and there can be 20 prompts used for just one image. While I used to think attaching prompts is good, I now think not attaching them is better, since when they are attached, people who have little experience with AI tools misinterpret things and object to the image based on flawed understandings of how these tools work in more elaborate or more intentional creative processes. Prototyperspective (talk) 15:00, 11 February 2024 (UTC)[reply]
When prompts are attached, the file is at least potentially useful as an illustration of what that [evolving] software did when given such a prompt on such a date. Otherwise, it's a big hunk of nothing. - Jmabel ! talk 20:08, 11 February 2024 (UTC)[reply]
It's not anything 'by default'. An image made or modified with Photoshop is not valuable by default. It depends on its contents / what it shows. Really simple. Prototyperspective (talk) 20:31, 11 February 2024 (UTC)[reply]
Not sure how Photoshop got into this discussion. A photo modified with Photoshop is a modified version of a particular photo. Without knowing the prompt, an image made by AI is nothing more than "an image made by AI". It is not clearly an image "of" anything. - Jmabel ! talk 23:13, 11 February 2024 (UTC)[reply]
It's called "whataboutism". A Russian thing. Alexpl (talk) 21:34, 18 February 2024 (UTC)[reply]
No, it's exceptionalism. AI tools are not special. No actual point(s) here, just derailing. Prototyperspective (talk) 21:52, 18 February 2024 (UTC)[reply]
It's a good idea to read project pages before commenting on them on their associated talk page.
Commons:AI-generated_media#Description says (and has already said for a good while):

Whenever you upload an AI-generated image (or other media file), you are expected to document the prompt used to generate the media in the file description, and identify the software that generated the media.

HaeB (talk) 01:08, 1 March 2024 (UTC)[reply]
so what happens when they don't do that?
it's a good idea to read before commenting on my original post:
"should it be a requirement for the uploader to provide the methodology (ai models used, prompts given, etc.)? and if the uploader doesn't provide the info, what should be done?" RZuo (talk) 13:56, 1 March 2024 (UTC)[reply]
It should be a requirement for them to at least say what model or models were used. The prompts aren't as important, since generation is usually random anyway, but providing them should still be a recommendation, if not something that requires action like not providing the model (at least IMO) should. Although I'm not going to go so far as to say that making it a requirement to provide the model or models used means the images should be deleted if they aren't. At least not as a first line of action. There's no reason we can't assume good faith and ask the uploader, then proceed with nominating the image or images for deletion if there's a reason to. I imagine there might be some edge cases where the model doesn't have any meaning whatsoever though, so I hesitate to say I think it should be a hard and fast rule. --Adamant1 (talk) 14:04, 1 March 2024 (UTC)[reply]
there's one more problem with failure to provide the prompts.
many websites host many ai generated images for users to search and use, e.g. https://openart.ai/search/xi%20jinping?method=prompt .
when the uploader is unable to provide the prompts, it's unclear whether the uploader is the true author who initiated the prompt and generated the file, or s/he just downloaded a file generated by someone else from these websites.
and since some countries rule that copyright can exist and belong to the person who did the prompt, ai generated commons files with unclear authorship should be deleted. RZuo (talk) 10:54, 14 March 2024 (UTC)[reply]
There's a few problems with that: 1) the prompt could be made up or it could be only a part of the full prompt (not the full text or only one of multiple prompts used for the creation of one image being provided), 2) the image could also be taken from or first uploaded to such a site by either the same user or another user (so it would still be unclear), and 3) prompts can be long and not entirely relevant to what the image is about so without any changes that would lead to such images showing up in more unrelated search results.
In addition, there's also 4) issues arising from readers who don't understand how txt2img prompting is used in practice (such as using examples or proxy-terms or otherwise hacking toward the desired image), for instance misinterpreting images based on what prompt was used (e.g. just because this or that is somewhere in a long prompt doesn't mean the category for this or that is due). Also just FYI, there's 5) some txt2img platforms adding prompts for selectable "styles", where the prompt text is not even known to the prompter, so what the prompter provides would not be the full prompt on those sites. Platforms where the prompt is shown, including playgroundai and the one you mentioned, could be used to collaboratively develop art and illustrations for the public domain, as well as for learning how others achieved good results. More info about UK genAI copyright would be useful; the source of images should always be specified.
Prototyperspective (talk) 11:45, 14 March 2024 (UTC)[reply]
the prompt could be....only one of multiple prompts used for the creation of one image being provided @Prototyperspective: Can you provide an example of where you'd use multiple prompts to create a single image? Like what software has multiple prompts outside of a different section for negative keywords (which at least IMO isn't really a prompt to begin with)? --Adamant1 (talk) 16:10, 14 March 2024 (UTC)[reply]
I did so roughly three times already but since that was elsewhere, a workflow could be like this: create an image of a person, crop that person out and generate another image with another prompt, paste in the person and do the same for two objects like a vehicle and a building, then use the resulting image and use img2img to alter and improve the image to change the style, then use yet another prompt to fix imperfections like misgenerated fingers, then for the final image use the result as an input with some proxy-terms for high-quality images but change the output resolution so it's scaled up to a high-resolution final image. At least two examples of mine where I included multiple prompts were deleted for no good reason, such as the DR being a keep outcome but the admin deleting the particular image anyway. Not everyone briefly types "Funny building, matte painting" clicks generate three times and is done with it. Prototyperspective (talk) 16:24, 14 March 2024 (UTC)[reply]
I mean, I guess that could happen, but then so what? Just include the last prompt. At least according to you AI art generator's are no different then photoshop and it's not like we include the title every single piece of software that touched an image along the pipeline to it being uploaded in a description. So I don't see why it would be any different for AI artwork. Although I think your example is extremely unlikely, if not just totally false hyperbole, to begin with. --Adamant1 (talk) 17:26, 14 March 2024 (UTC)[reply]
I'm not an advanced prompter by any means and I have used workflows coming close to that, so it's not exaggerated. I think the quality of AI images as well as the sophistication of txt2img workflows is going to increase over time. I'm not opposed to asking for prompts (in specific) – I was just clarifying complications. And I've seen many cases where the last prompt was as simple as one word on such prompt-sharing websites. In the example workflow, the last prompt could be the least important. The info on potential and/or identified complications could be more relevant in the other discussion about the Upload Wizard. Prompts are usually good to include and useful, but they don't somehow verify that the uploader is the prompter, who should, if they tasked the AI software, specify that. Prototyperspective (talk) 17:37, 14 March 2024 (UTC)[reply]
I don't disagree. It's not like someone can't just copy a prompt from wherever they downloaded the image. I can't really think of any other reason they would be useful though. Let alone why they would or should be required. At least other than the novelty factor of someone being able to roughly recreate the image if they want to, but then I feel like it's not the purpose of Commons to be a prompt repository either. Especially if that's the main or only reason to include them. --Adamant1 (talk) 17:48, 14 March 2024 (UTC)[reply]
"I can't really think of any other reason they would be useful though." Here are a few:
  1. AI is constantly evolving. It will probably be of interest five years from now to know what a particular AI would have done in March 2024, given a particular prompt.
  2. If the image is to be used to illustrate anything other than what a particular AI program does -- that is, if it supposedly represents anything past or present in the real world -- presumably we should be interested in whether the prompt actually asked for the thing supposedly represented. For example, if someone is claiming that an image is supposed to represent George Washington as a child, presumably the result of a prompt asking for just that has more validity than just a prompt for "18th-century American boy." Or maybe not, but if not then no AI representation of this subject has any validity at all. - Jmabel ! talk 21:26, 14 March 2024 (UTC)[reply]
  1. @Jmabel: To your first point, I'd say prompts are a reflection of what a particular user of the software thinks they need to put into a textbox to generate a certain image. That's about it though since images are never generated consistently and a lot of keywords just get ignored during the generation process. Although I would probably support a side project Wikibooks for storing prompts for their novelty and historical usefulness, but then the point in Commons is to be a media repository, not an encyclopedia, and I don't think that's served with storing a bunch of prompts that were probably ignored by the AI and aren't going to generate the same image anyway. Maybe if there was a 1/1 recreation of an image based on a particular prompt, but no one is arguing that is how they work.
  2. On the second thing, I think your last sentence pretty much summarizes what my argument there would be. An AI representation of George Washington as a child has no validity to begin with, largely because of the reasons I've already stated in the first paragraph as to why prompts don't matter in the first place. An AI representation of George Washington as a child doesn't somehow magically become legitimate just because it includes a prompt that will probably generate an image of a completely different child if someone were to use it. --Adamant1 (talk) 08:49, 15 March 2024 (UTC)[reply]
    Amendment to 1: the image would be recreated 1:1 if the parameters and seed were the same with the same model.
    The key there is the seed, so it would at least be very similar if the same seed was used. A prompt that works well for one seed is not unlikely to also work well with another one, but it could look very different. playgroundai and openart.ai include full prompts, seeds, and parameters. 2. For things like George Washington as a child, the AI would need to semantically understand which training data shows the person (or an object or concept), as done at low quality here using DreamBooth, because models currently don't get semantics. In the file description there is more than a prompt. Prototyperspective (talk) 11:24, 15 March 2024 (UTC)[reply]
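The seed point above can be illustrated with a minimal sketch. This uses the Python standard library's random generator as a stand-in for a diffusion model's latent-noise sampler; the function name and sizes are illustrative, not any particular tool's API:

```python
import random

def initial_noise(seed, n=8):
    # Diffusion models start from random noise; fixing the seed fixes
    # that noise, which (together with the same model, prompt, and
    # sampler parameters) pins down the final image.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = initial_noise(42)
b = initial_noise(42)
c = initial_noise(7)
print(a == b)  # True: the same seed reproduces the same starting noise
print(a == c)  # False: a different seed gives different noise
```

This is why sites like openart.ai record the seed alongside the prompt and parameters: without it, rerunning the same prompt generally yields a different image.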
The most obvious example - to me, at least - would be something like Midjourney. There's an initial textual prompt, but it's often followed by a sequence of operations like selecting individual variations and/or expanding the canvas which alter the image in ways which go beyond the prompt. And even then I'm not sure how repeatable of a process it is. Omphalographer (talk) 17:49, 14 March 2024 (UTC)[reply]
I guess so. Although I think you're quickly reaching a point of diminishing returns when it comes to ever recreating the same, or a similar, image at that. --Adamant1 (talk) 17:54, 14 March 2024 (UTC)[reply]

AI upscaling of historical paintings[edit]

What is Commons' stance on the AI upscaling of historical paintings?

I was surprised to see that File:Bernardetto de Medici - Giorgio Vasari.jpg, generated by taking a low resolution photograph of a painting and asking an AI to enlarge it and increase the saturation, actually survived a specific deletion request with "no valid reasons for deletion" last year.

For an upscaled low-quality photograph there's some argument to be made that the AI might have a better idea of what a person might have looked like than the grainy photograph does, but unless an AI has been scrupulously trained on the work of the specific artist, scaling up a painting seems like it will always be misleading and out of educational COM:SCOPE. Belbury (talk) 09:59, 27 February 2024 (UTC)[reply]

  • The claim in the license area that "this is a faithful photographic reproduction of a two-dimensional, public domain work of art" seems false to me. Not that there is a licensing problem, but this is not a "faithful photographic reproduction," it's a derivative work. - Jmabel ! talk 19:11, 27 February 2024 (UTC)[reply]
  • Does that matter? That claim is important because a process that's only a "faithful photographic reproduction" is taken to not have created some new protectable copyright, so we can host the overall image as long as the original was otherwise clear. If some additional process does something more, then that might attract a copyright. But not if it's an AI process (as we're using an assumption that AIs can't create copyrightable material), or if the upscaling was done by some scanner (sic) here who is happy to license any contribution that their own efforts have made. Andy Dingley (talk) 01:42, 1 March 2024 (UTC)[reply]
    • It doesn't matter in licensing terms, but it does in documentary terms. It is a made-up extrapolation, and should not be reused under the misimpression that it is a faithful reproduction. - Jmabel ! talk 05:25, 1 March 2024 (UTC)[reply]
I'd be inclined to agree. Using AI upscaling on a low-resolution photograph of a painting will inevitably result in an image which differs from the original work. No matter how good an upscaler is, it'll never know the difference between a detail which was blurry because it's working from a low resolution photo and a detail which was blurry because the original painting was indistinct (for example). Worse, a lot of AI upscalers are primarily trained on photographs, and will frequently add details in an inappropriate style, like inferring photorealistic, smooth-shaded faces in engravings where those details should have been rendered with patterns of lines.
As far as that file is concerned, I'd be inclined to renominate it with a more clear rationale, focusing on the fact that it's not an accurate representation of the original piece. Omphalographer (talk) 19:54, 27 February 2024 (UTC)[reply]

How do you know an image is not a person's attempt to imitate an ai-generated image?[edit]

how do you know an image is certainly ai-generated? what if a person does an artwork manually, but in a way to imitate "ai style" and give audience an impression that it's ai-generated?

File:Global Celebration Of New Years.jpg is what prompts me to ask these questions and #Declaration of AI prompts. it fails because in the centre foreground it's imitating rather than using actual cjk chars, so it looks like ai-generated, but then it can equally likely be an artwork by a person without knowledge of cjk chars. RZuo (talk) 14:04, 1 March 2024 (UTC)[reply]

I think the question is usually rather whether it's AI-generated or not; manual artwork misidentified as AI-generated is less of an issue so far. It's basically the same question, and there's Category:Unconfirmed likely AI-generated images to check for these. Note that often only a part of a manual image, or one step of an otherwise manual process, is AI-supported or AI-generated. Prototyperspective (talk) 14:26, 1 March 2024 (UTC)[reply]
Simple answer: You can't know. From this point on, everything on Commons without proper provenance is basically some sort of fraud. The companies which offer such services should really provide provenance for the output their AI creates. Alexpl (talk) 23:00, 2 March 2024 (UTC)[reply]

More food for thought: Youtube's AI guidelines[edit]

https://blog.google/intl/en-in/products/platforms/how-were-helping-creators-disclose-altered-or-synthetic-content/

As well as some additional examples at: https://support.google.com/youtube/answer/14328491

Youtube now requires disclosure for videos containing certain categories of potentially deceptive content. Some of the categories they've defined for material requiring disclosure may be worth considering with regard to Commons policies on AI. Omphalographer (talk) 18:38, 18 March 2024 (UTC)[reply]

We do. It's called {{PD-algorithm}}. Anything more than this would not serve any purpose other than making uploading AI images a burden on the uploader. --Trade (talk) 23:18, 6 April 2024 (UTC)[reply]
Nobody cares about burdens for AI-uploaders. To put it mildly. Alexpl (talk) 13:51, 7 April 2024 (UTC)[reply]
Not nobody. Personally, I strongly object to the type of game playing we see in politics all the time, where one side can't ban something, but puts up onerous rules against it for the sole point of discouraging it.--Prosfilaes (talk) 18:45, 7 April 2024 (UTC)[reply]
Would you like me to start working on consensus for a ban (or at least a moratorium) on AI-generated files? So far, I've been trying to avoid that. - Jmabel ! talk 20:38, 7 April 2024 (UTC)[reply]
Hey, don't threaten me with a good time. Omphalographer (talk) 04:29, 8 April 2024 (UTC)[reply]
If you must. I just oppose the idea that nobody cares about undue burdens on users.--Prosfilaes (talk) 20:15, 9 April 2024 (UTC)[reply]
For a media repository hosting free-to-use images, sounds, videos - AI created content is, and will always be, pure poison. So (...) Alexpl (talk) 07:44, 8 April 2024 (UTC)[reply]
That's your opinion. If you can't get a consensus on that, I think it hostile and damaging to the community to toss up rules that "would not serve any purpose other than making uploading AI images a burden on the uploader."--Prosfilaes (talk) 20:15, 9 April 2024 (UTC)[reply]
People on here coddle and pander to uploaders way too much. The paternalism towards uploaders is just bizarre, especially in this case, considering how much AI artwork is absolutely worthless trash. I really don't have the urge to put everyone else out just to indulge a group of users who probably don't care and are just using Commons as a personal file host for their fantasy garbage to begin with. And no, that doesn't mean everything generated by AI is worthless or shouldn't be hosted on Commons, but there's a cost-benefit here that's clearly slanted towards this being a massive time sink that could potentially be detrimental to the project without proper guidelines. But hey, who cares as long as we can be needlessly overprotective of uploaders by not doing anything about it, right? --Adamant1 (talk) 23:42, 9 April 2024 (UTC)[reply]
Yeah. Or to use the language of "the community" - Я согласен. Alexpl (talk) 11:59, 11 April 2024 (UTC)[reply]
It's not too much to ask for policies to have a purpose other than spite Trade (talk) 22:55, 20 April 2024 (UTC)[reply]

Possibility of future laws[edit]

I am just gonna bring this up for all ai generated media for Wikimedia.

In March of this year, Tennessee passed the Elvis Act, making it a crime to copy a musician’s voice without permission. Also there is this.

Yes, I know this is all very recent, but I do think Wikimedia should be cautious, because it seems new laws could harm Wikipedia’s usage of AI-generated media. CycoMa1 (talk) 19:07, 14 April 2024 (UTC)[reply]

I hope so. Alexpl (talk) 12:47, 18 April 2024 (UTC)[reply]
There's also a bipartisan bill in the United States that would require the labeling of AI-generated videos and audio. I don't think it's passed yet, but I think that's probably the direction we are going in. It would be interesting to know how something like that could even be enforced on Commons if such a bill ever passes though. Probably the only way is by banning AI-generated content in some way, if not just totally banning it outright. --Adamant1 (talk) 12:53, 18 April 2024 (UTC)[reply]
The only way to label AI generated videos is to ban them? That's deeply confusing.--Prosfilaes (talk) 14:12, 18 April 2024 (UTC)[reply]
I know you're just taking what I said out of context, but that's why I said it would be interesting to know how something like that could be enforced. I don't know what else we can do other than ban AI-generated videos if there's no workable way to label them as such though. Of course we could include said information in the file description, but then it sounds like at least the bill I linked to would require the files themselves to be digitally watermarked as AI-generated, which of course we would have no control over. And that's probably where the banning would come into play. Not that I think your question was genuine in the first place though. Otherwise, what's your proposed way to enforce such a law if one were to be passed, other than banning AI-generated videos that don't contain the digital watermarking? --Adamant1 (talk) 14:45, 18 April 2024 (UTC)[reply]
Well, we can label them: if no comprehensible provenance can be provided, it's treated as AI content. Alexpl (talk) 15:11, 18 April 2024 (UTC)[reply]
Perhaps I was taking it in the exact context it was provided in, one that made no mention of digital watermarking in the video. Moreover, if such a thing is required, most reputable sources will provide it in the first place. We could add digital watermarking ourselves, and yes, a ban on videos that don't have digital watermarking is possible. Note that you jumped to "just totally banning it outright", instead of requiring or adding labels.
And Commons:Assume good faith.--Prosfilaes (talk) 15:29, 18 April 2024 (UTC)[reply]
I'd buy that, but my comment was made in the context of the article about the bill, which pretty clearly mentions how it would require digitally watermarking videos. It's not on me that you didn't read said article before replying to my message. Also, in no way did I "jump" to just totally banning it outright. I pretty clearly said "probably banning it in some way." That's obviously not "totally banning it outright." Although totally banning it is an option in the absence of anything else. But nowhere did I say that should be the only or main solution. It's kind of hard to assume good faith when you're taking my comment out of context and reading negative intent into it that isn't even there to begin with. --Adamant1 (talk) 15:46, 18 April 2024 (UTC)[reply]
Meh, if Commons can find ways to get around 18 USC 2257 then this shouldn't be an issue Trade (talk) 22:58, 20 April 2024 (UTC)[reply]

What shouldn't AI-generated content be used for?[edit]

Let's take a break from the endless circular discussions of "should AI media be banned" for a moment. Something I think we should be able to, at least, reach some consensus on is a canonical list of things that AI-generated images aren't good for - tasks which AI image generators are simply incapable of doing effectively, and which users should be discouraged (not banned) from uploading.

A couple of examples to get the discussion started:

  • Graphs and charts. AI image generators aren't able to collect, interpret, or visualize data; any "graphs" they produce will inevitably be wildly wrong.
  • Technical and scientific diagrams - flowcharts, schematics, blueprints, formulas, biological illustrations, etc. Producing accurate images of these types requires domain knowledge and attention to detail which image generation AIs aren't capable of. Some examples of how this can go horribly wrong were highlighted in an Ars Technica article a few months ago: Scientists aghast at bizarre AI rat with huge genitals in peer-reviewed article.
  • Images of people who aren't well represented in the historical record by photographs or contemporary artwork. AI is not a crystal ball; it cannot transform a name or a broad description of a person into an accurate portrait.

Thoughts? Omphalographer (talk) 23:07, 20 April 2024 (UTC)[reply]

Since the landscape is constantly changing, with new features and skillsets being added, there is not much (beyond the ability to accurately draw people without source material) we can ultimately rule out. Just forbid AI's usage for anything but representation of its own work. Alexpl (talk) 11:54, 21 April 2024 (UTC)[reply]
The problem is that the pro-AI-artwork people just throw their chicken nuggies at the wall and foam at the mouth about how original works aren't true representations of historical people either. If anyone wants an example, just read through the walls of text in Commons:Deletion requests/File:Benjamin de Tudela.jpg. There's no reasoning with people like that, and unfortunately they are the ones who ultimately have the say here, sans the WMF ever coming out with a stance on it one way or another. I think that's really the only solution at this point though. We clearly aren't going to just ban AI artwork, nor should we, but I also suspect there's never going to be reasonable guidelines about it without us being forced into implementing them by an outside party like the WMF either. --Adamant1 (talk) 12:08, 21 April 2024 (UTC)[reply]
"they are the ones who ultimately have the say here" - that's BS. The argumentation presented on the talk page by the activist is useless. The next guy can seed his AI work with Benjamin de Tudela as some obese, bald dude with a parrot on his shoulder and without the religious artifacts. The concept of teaching people anything with these interpretations has the built-in failure of making stuff up. Alexpl (talk) 13:27, 21 April 2024 (UTC)[reply]
Oh I totally agree. AI artwork is essentially worthless as a teaching aid because every image it generates is completely different. The problem is that all it takes is a couple of mouth foamers like that user in the DR to derail any guideline proposal. To the point that I don't think we could even implement something saying that AI-generated images of clearly made-up fantasy settings or people should be banned. At least not without some kind of guidance from the WMF or a serious change in the laws. I'd love to be proven wrong though. --Adamant1 (talk) 13:54, 21 April 2024 (UTC)[reply]
Please stop using the talk page as a place to attack other users. Thanks Trade (talk) 22:12, 22 April 2024 (UTC)[reply]
We can still document the limitations of AI image models as they currently exist. If advancements in technology address some of those limitations, we can update the page to reflect that. Omphalographer (talk) 17:03, 21 April 2024 (UTC)[reply]
If I understand you correctly, that would mean keeping this page up to date for some two dozen evolving AI systems. Even tracking one such system's capabilities requires a PhD-level amount of work and skill. Alexpl (talk) 09:25, 22 April 2024 (UTC)[reply]

Country of origin[edit]

Apologies if this has already been asked and I just missed it, but how is anyone on here realistically supposed to determine the country of origin for an AI-generated image? And for that matter, is there even one in a lot of cases to begin with (for instance with images generated by online tools)? Adamant1 (talk) 01:19, 27 April 2024 (UTC)[reply]

Did anyone ask you to name a country of origin for AI-work? Alexpl (talk) 10:29, 27 April 2024 (UTC)[reply]