Commons talk:WMF support for Commons/Commons Impact Metrics/Instructions:Commons Impact Metrics Prototype User Testing


Your Feedback needed


After you have reviewed the data model and tinkered with the sample data, please help us answer the following questions:

How do you feel about our approach to the category tree?

What questions are you able to answer with these datasets?

Do the datasets miss any questions / use cases you may have?

How might we leverage Commons Impact Metrics data to support existing tools? What would be required to do so? - Udehb-WMF (talk) 14:32, 11 December 2023 (UTC)

Feedback


Thanks for this - I've had a bit of a play around and here are my responses to the questions:

How do you feel about our approach to the category tree? In my previous role as a Wikimedian in Residence, I had difficulty with the category tree being used as the foundation for statistics. Subcategories within my organisation's category had non-org files added to them, so the statistics were always inaccurate - they didn't represent the files uploaded by the organisation. It would be really helpful to be able to use another measure (e.g. a source template used on a page) to perform the same actions as are available in these prototypes.

As an example of what I mean and how this came about, in case it's useful: someone created a subcategory for a particular incunable held at the org. When others uploaded images from their own version of the incunable, they found a category with the title of the publication and added it to their work, so images not belonging to the org were nested within the org's category tree and influenced the statistics.

This may not be an issue for people uploading smaller or more specific sets of images; the set I'm referring to was unusually large.
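To illustrate the template-based alternative, here is a minimal sketch (not part of the prototypes) of how a tool could enumerate files by a source template rather than by walking the category tree, using the public MediaWiki API. The template name "ExampleGLAM-source" is a hypothetical placeholder.

```python
# Minimal sketch: list File: pages that transclude a given source template
# on Commons, as an alternative basis to the category tree. Assumes the
# standard MediaWiki API; the template name is hypothetical.
import requests

API = "https://commons.wikimedia.org/w/api.php"

def files_with_source_template(template, limit=500):
    """Yield File: page titles that transclude Template:<template>."""
    params = {
        "action": "query",
        "list": "embeddedin",
        "eititle": f"Template:{template}",
        "einamespace": 6,   # File namespace only
        "eilimit": limit,
        "format": "json",
    }
    while True:
        data = requests.get(API, params=params).json()
        for page in data["query"]["embeddedin"]:
            yield page["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

# Example: files carrying a (hypothetical) institutional source template
for title in files_with_source_template("ExampleGLAM-source"):
    print(title)
```

A query like this would only pick up files that explicitly carry the organisation's template, which avoids the nested-subcategory problem described above.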

What questions are you able to answer with these datasets? Many of the key questions are largely answered with these datasets - what is used, where, and how much. These basic metrics help people to report on the success of engaging with Wikipedia, and help motivate uploaders to edit after an upload so that files are actually used rather than just deposited.

Do the datasets miss any questions / use cases you may have? In my experience, organisations are very much interested in how files are edited once uploaded. This is partly from a cautious perspective (reassurance about the ability to prevent vandalism), but increasingly (and more excitingly!) also from the perspective of what the organisation can learn from communities beyond its own walls.

This could crudely be achieved by adding either the edit summary or the number of bytes added/removed to the Commons Edits prototype. On one level, this would let editors/orgs commend those who add a lot of information (not always the same as making a lot of edits!), and on another, these sorts of data could help create opportunities for knowledge exchange and help GLAMs find qualitative engagement as well as quantitative.
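As a rough sketch of what such figures look like, byte changes and edit summaries can already be derived per revision from the public API; something like this could feed the Commons Edits dataset. The file title is a hypothetical example.

```python
# Minimal sketch: per-revision byte deltas and edit summaries for one file,
# via the standard MediaWiki revisions API. No continuation handling, for
# brevity; the file title is a hypothetical example.
import requests

API = "https://commons.wikimedia.org/w/api.php"

def revision_deltas(file_title):
    """Return (timestamp, user, bytes_changed, comment) for each revision."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": file_title,
        "rvprop": "timestamp|user|size|comment",
        "rvlimit": "max",
        "rvdir": "newer",   # oldest first
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    deltas, previous_size = [], 0
    for rev in page.get("revisions", []):
        # First revision's delta equals its full size (previous_size starts at 0)
        deltas.append((rev["timestamp"], rev.get("user"),
                       rev["size"] - previous_size, rev.get("comment", "")))
        previous_size = rev["size"]
    return deltas

for row in revision_deltas("File:Example.jpg"):
    print(row)
```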

Going beyond the basic, it would be amazing if it were possible to pull in any structured data added to files on Commons (even more so if it were possible to specify language labels for the structured data).
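For the structured data point, here is a minimal sketch (under the assumption that the wbgetentities API continues to expose MediaInfo entities) of pulling the caption and statements attached to a file, filtered to one language. The file title is a hypothetical example.

```python
# Minimal sketch: fetch the structured data (caption label and statements)
# for a Commons file in a chosen language. Assumes wbgetentities exposes the
# file's MediaInfo entity as "M" + page ID; the file title is hypothetical.
import requests

API = "https://commons.wikimedia.org/w/api.php"

def structured_data(file_title, language="en"):
    # Look up the page ID, then the MediaInfo entity M<pageid>.
    pages = requests.get(API, params={
        "action": "query", "titles": file_title, "format": "json",
    }).json()["query"]["pages"]
    page_id = next(iter(pages))
    entity = requests.get(API, params={
        "action": "wbgetentities",
        "ids": f"M{page_id}",
        "languages": language,   # restrict labels (captions) to one language
        "format": "json",
    }).json()["entities"][f"M{page_id}"]
    caption = entity.get("labels", {}).get(language, {}).get("value")
    statements = entity.get("statements") or entity.get("claims", {})
    return caption, statements

caption, statements = structured_data("File:Example.jpg", language="en")
print(caption, list(statements))
```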

How might we leverage Commons Impact Metrics data to support existing tools? What would be required to do so? Not sure about this one!

Zeromonk (talk) 15:52, 17 January 2024 (UTC)