Crafting Real-World Queries: MS MARCO Web Search's Authentic Data
This article describes the construction of a large-scale, high-quality query-document relevance dataset: one year of Bing search engine logs is filtered to remove low-value or non-compliant queries, yielding a dataset that reflects the real query distribution of a commercial search engine. The data is split into train, dev, and test sets for model training and evaluation, and extended to a larger-scale dataset. 2025-06-29 Source: hackernoon.com

To generate large-scale, high-quality queries and query-document relevance labels, we sample query-document clicks from one year of Bing search engine's logs. The initial query set is filtered to remove queries that are rarely triggered, that contain personally identifiable information, offensive content, or adult content, and those with no click connection to the ClueWeb22 document set. The resulting set consists of queries triggered by many users, which reflects the real query distribution of a commercial web search engine.
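The filtering step can be sketched as follows. This is a minimal illustration of the described criteria, not the actual Bing pipeline; the log schema, the content flags, and the `min_users` threshold are all hypothetical stand-ins.

```python
def filter_queries(query_log, clueweb_doc_ids, min_users=5):
    """Keep queries triggered by many users whose clicks land in the document set.

    query_log: dict mapping query -> {"users": int, "flags": set of str,
                                      "clicked_docs": set of doc ids}
    clueweb_doc_ids: ids present in the ClueWeb22 document set.
    (All names and thresholds here are illustrative assumptions.)
    """
    disallowed = {"pii", "offensive", "adult"}
    kept = {}
    for query, info in query_log.items():
        if info["users"] < min_users:
            continue  # rarely triggered
        if info["flags"] & disallowed:
            continue  # PII / offensive / adult content
        if not (info["clicked_docs"] & clueweb_doc_ids):
            continue  # no click connection to ClueWeb22
        kept[query] = info
    return kept
```

Each criterion is an independent predicate, so the surviving queries are exactly those that are popular, compliant, and connected to the document set by at least one click.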

The queries are split into train and test sets based on time, mirroring the real-world web scenario of training an embedding model on past data and serving future incoming web pages and queries. We sample around 10 million query-document pairs from the train set and 10 thousand query-document pairs from the test set. The documents in the query-document train and test sets are then merged into the 100 million train document set and test document set, respectively (shown in the right part of Figure 1). To enable quality verification of the model during training, we split a dev query-document set off from the train query-document set. Since the train and dev sets share the same document set, the dev set can be used to quickly verify training correctness and model quality during training. For the 10B dataset, we use the same train, dev, and test queries but sample more query-document pairs.
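The time-based split and sampling described above can be sketched as below. The cutoff, sample sizes, dev fraction, and record layout are illustrative assumptions, not the paper's actual parameters.

```python
import random

def split_and_sample(pairs, cutoff, n_train, n_test, seed=0):
    """Split click pairs by time, then sample and hold out a dev slice.

    pairs: list of (timestamp, query, doc_id) click records.
    Pairs before `cutoff` go to train, later ones to test, mirroring
    training on past data and serving future queries.
    (Sizes and the 1% dev fraction are illustrative assumptions.)
    """
    rng = random.Random(seed)
    past = [p for p in pairs if p[0] < cutoff]
    future = [p for p in pairs if p[0] >= cutoff]
    train_sample = rng.sample(past, min(n_train, len(past)))
    test_sample = rng.sample(future, min(n_test, len(future)))
    # Hold out a small dev slice from the train sample; since dev shares
    # the train document set, it can cheaply verify training quality.
    dev_size = max(1, len(train_sample) // 100)
    dev, train = train_sample[:dev_size], train_sample[dev_size:]
    return train, dev, test_sample
```

Because the dev set is carved out of the sampled train pairs rather than the test pairs, no future data leaks into training, while the dev documents remain inside the train document set.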

Authors:

(1) Qi Chen, Microsoft, Beijing, China;

(2) Xiubo Geng, Microsoft, Beijing, China;

(3) Corby Rosset, Microsoft, Redmond, United States;

(4) Carolyn Buractaon, Microsoft, Redmond, United States;

(5) Jingwen Lu, Microsoft, Redmond, United States;

(6) Tao Shen, University of Technology Sydney, Sydney, Australia and the work was done at Microsoft;

(7) Kun Zhou, Microsoft, Beijing, China;

(8) Chenyan Xiong, Carnegie Mellon University, Pittsburgh, United States and the work was done at Microsoft;

(9) Yeyun Gong, Microsoft, Beijing, China;

(10) Paul Bennett, Spotify, New York, United States and the work was done at Microsoft;

(11) Nick Craswell, Microsoft, Redmond, United States;

(12) Xing Xie, Microsoft, Beijing, China;

(13) Fan Yang, Microsoft, Beijing, China;

(14) Bryan Tower, Microsoft, Redmond, United States;

(15) Nikhil Rao, Microsoft, Mountain View, United States;

(16) Anlei Dong, Microsoft, Mountain View, United States;

(17) Wenqi Jiang, ETH Zürich, Zürich, Switzerland;

(18) Zheng Liu, Microsoft, Beijing, China;

(19) Mingqin Li, Microsoft, Redmond, United States;

(20) Chuanjie Liu, Microsoft, Beijing, China;

(21) Zengzhong Li, Microsoft, Redmond, United States;

(22) Rangan Majumder, Microsoft, Redmond, United States;

(23) Jennifer Neville, Microsoft, Redmond, United States;

(24) Andy Oakley, Microsoft, Redmond, United States;

(25) Knut Magne Risvik, Microsoft, Oslo, Norway;

(26) Harsha Vardhan Simhadri, Microsoft, Bengaluru, India;

(27) Manik Varma, Microsoft, Bengaluru, India;

(28) Yujing Wang, Microsoft, Beijing, China;

(29) Linjun Yang, Microsoft, Redmond, United States;

(30) Mao Yang, Microsoft, Beijing, China;

(31) Ce Zhang, ETH Zürich, Zürich, Switzerland and the work was done at Microsoft.


Source: https://hackernoon.com/crafting-real-world-queries-ms-marco-web-searchs-authentic-data?source=rss