Commons:SPARQL query service

From Wikimedia Commons, the free media repository

Access the Wikimedia Commons Query Service

How to use

This page needs to be updated with more information on how to use the Commons Query Service.

The Commons Query Service runs on Wikibase, so the Wikidata query help provides the relevant usage documentation; the "M ID", which is specific to this site, is explained below.

M ID

The M ID is the part of the query service specific to Commons: it is the unique identifier of each file on Commons, equivalent to a Q ID on Wikidata.
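As a minimal sketch of how an M ID is used in practice (the M ID below is purely illustrative; the full concept-URI form is used because it works regardless of prefix support):

```python
# Build a SPARQL query that lists every property/value pair stored for
# one MediaInfo entity, addressed by its full concept URI.
# M37200540 is an illustrative M ID, not a specific file recommendation.

mid = "M37200540"
query = f"""
SELECT ?property ?value WHERE {{
  <http://commons.wikimedia.org/entity/{mid}> ?property ?value .
}}
"""
print(query)
```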

Finding an M ID

A single file

To find the M ID of a single image, open the file page, locate "Concept URI" in the left-hand menu, then right-click and copy the link; the link contains the M ID. Alternatively, open "Page information" and prepend "M" to the "Page ID" shown there to obtain the M ID.
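The "Page information" route above can be sketched as follows (the page ID used here is only an illustrative value):

```python
# Derive an M ID from a Commons page ID, as described above:
# the M ID is simply the numeric page ID prefixed with "M".
# 37200540 is an illustrative page ID taken from this page's examples.

def mid_from_page_id(page_id: int) -> str:
    """Prefix a Commons page ID with 'M' to form the M ID."""
    return f"M{page_id}"

def concept_uri(mid: str) -> str:
    """Full MediaInfo concept URI for an M ID (currently HTTP)."""
    return f"http://commons.wikimedia.org/entity/{mid}"

print(mid_from_page_id(37200540))  # M37200540
print(concept_uri("M37200540"))    # http://commons.wikimedia.org/entity/M37200540
```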

Multiple files
  • PetScan can be used to find the M IDs of all files in a Wikimedia Commons category. Look up the name of the Commons category and select the following options:
    1. Language = commons
    2. Project = wikimedia
    3. Categories = Name of the category (replacing spaces in the name with _)
    4. Combination = Intersection
    5. Go to the 'Page properties' tab and under 'Namespaces' tick the 'File' box
    In the results, the 'Page ID' is the M ID (the 'M' prefix must be added by the user). The results can be copied manually, or the 'Output' tab offers a range of export options.
  • Minefield can be used to convert a list of Commons file page titles to M IDs.
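As an alternative to the tools above, the standard MediaWiki API can report the page ID for each title in a list, and prefixing each with "M" yields the M ID. A sketch, assuming the documented `action=query` response shape (the HTTP request itself is not executed here):

```python
# Sketch: turn Commons file titles into M IDs via the MediaWiki API.
# action=query returns a numeric pageid for each title; the M ID is
# that number prefixed with "M". Performing the HTTP request is left
# to the reader; parse_mids() works on the decoded JSON response.

API_URL = "https://commons.wikimedia.org/w/api.php"

def build_params(titles):
    """Query parameters for api.php; titles are joined with '|'."""
    return {
        "action": "query",
        "format": "json",
        "titles": "|".join(titles),
    }

def parse_mids(response_json):
    """Map each page title to its M ID from an action=query response."""
    pages = response_json["query"]["pages"]
    return {p["title"]: f"M{p['pageid']}" for p in pages.values()}

# Illustrative response fragment in the documented shape:
sample = {"query": {"pages": {
    "37200540": {"pageid": 37200540, "title": "File:Example.jpg"},
}}}
print(parse_mids(sample))  # {'File:Example.jpg': 'M37200540'}
```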

API endpoint

Because WCQS is an authenticated service, it is currently not as easy to use as an API endpoint as you might expect if you are familiar with Wikidata's unauthenticated endpoint.

See Commons:SPARQL query service/API endpoint for details on how to programmatically access the query service.
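The linked page is the authoritative reference. As a hedged sketch only — the endpoint URL and the cookie-based authentication here are assumptions, not confirmed by this page — a programmatic request might be prepared like this:

```python
# Hedged sketch of preparing an authenticated WCQS request with urllib.
# ASSUMPTIONS (verify against Commons:SPARQL query service/API endpoint):
#   - the SPARQL endpoint URL below,
#   - that authentication is carried in a session cookie.
import urllib.parse
import urllib.request

WCQS_SPARQL_URL = "https://commons-query.wikimedia.org/sparql"  # assumed

def build_request(query, auth_cookie):
    """Prepare a POST request carrying a SPARQL query and an auth cookie."""
    data = urllib.parse.urlencode({"query": query}).encode()
    req = urllib.request.Request(WCQS_SPARQL_URL, data=data)
    req.add_header("Accept", "application/sparql-results+json")
    req.add_header("Cookie", auth_cookie)  # obtained after logging in
    req.add_header("User-Agent", "example-wcqs-client/0.1")
    return req

# urllib.request.urlopen(build_request(...)) would then return JSON
# results; it is not executed here because it requires credentials.
```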

Release notes

This is a beta SPARQL endpoint exposing the Structured Data on Commons (SDoC) dataset. This endpoint can federate with WDQS. More work is needed as we iterate on the service, but feel free to begin using the endpoint. Known limitations are listed below:

  • The service is a beta endpoint that is updated via weekly dumps. Some caveats include limited performance, expected downtimes, and no interface, naming, or backward compatibility stability guarantees.

The service is hosted on Wikimedia Cloud Services, with limited resources and limited monitoring. This means there may be random unplanned downtime. The data will be reloaded weekly on Mondays from dumps taken on Sunday. The dumps can be seen at https://dumps.wikimedia.org/other/wikibase/commonswiki/. The service will be down during data reload. With the current amount of SDoC data, downtime will last approximately 4 hours, but this may increase as SDoC data grows.

  • Due to an issue with the dump format, the data currently only dates back to July 5th. We’re working on getting more up-to-date data and hope to have a solution soon. (T258507 and T258474)
  • The MediaInfo concept URIs (e.g. http://commons.wikimedia.org/entity/M37200540) are currently HTTP; we may change these to HTTPS in the near future. Please comment on T258590 if you have concerns about this change.
  • Please note that to correctly log out of the service, you need to use the logout link in WCQS - logging out of Wikimedia Commons alone will not work for WCQS. This limitation will be lifted once we move to production.
  • Please use the SPARQL template. Note that while there is currently a bug that doesn’t allow us to change the “Try it!” link endpoint, the examples will be displayed correctly on the WCQS GUI.
  • WCQS is a work in progress and some bugs are to be expected, especially related to generalizing WDQS to fit SDoC data. For example, current bugs include:
    • URI prefixes specific for SDoC data don’t yet work - you need to use full URIs if you want to query using them. Relations and Q items are defined by
    • Autocomplete for SDoC items doesn’t work - without prefixes they’d be unusable anyway, but additional work will be required after we inject SDoC URI prefixes into the WCQS GUI.
  • If you find any additional bugs or issues, please report them via Phabricator with the tag wikidata-query-service.
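Since the endpoint can federate with WDQS, a combined query is possible today; the sketch below declares its prefixes explicitly so it does not depend on built-in prefix support, which the notes above describe as incomplete for SDoC data. P180 (depicts) and Q146 (house cat) are illustrative choices, and the query is only constructed here, not executed:

```python
# Sketch of a federated query: SDoC "depicts" statements on WCQS,
# joined with English labels fetched from WDQS via a SERVICE clause.
# PREFIX declarations are written out in full on purpose.

federated_query = """
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?file ?catLabel WHERE {
  ?file wdt:P180 wd:Q146 .                      # files depicting Q146
  SERVICE <https://query.wikidata.org/sparql> {  # federate with WDQS
    wd:Q146 rdfs:label ?catLabel .
    FILTER(LANG(?catLabel) = "en")
  }
}
LIMIT 10
"""
print(federated_query)
```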
Future plans

We do plan to move this service to production, but there is no timeline for it yet. It must be stressed that while we do want a SPARQL endpoint to be part of the medium- to long-term solution, it is only one part of that solution. Even once the service is production-ready, it will still be constrained in areas such as timeouts, expensive queries, and federation. Over time, some use cases will need to migrate to better solutions, once those exist.