LLM Data Processing

Data protection

Presidio

Presidio - Data Protection and De-identification SDK: a context-aware, pluggable, and customizable PII de-identification service for text and images.

https://github.com/microsoft/presidio (opens in a new tab)

Presidio (from the Latin praesidium, ‘protection, garrison’) helps ensure sensitive data is properly managed and governed. It provides fast identification and anonymization modules for private entities in text such as credit card numbers, names, locations, social security numbers, bitcoin wallets, US phone numbers, financial data, and more.

https://microsoft.github.io/presidio/ (opens in a new tab)
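
The core of Presidio is a two-stage analyze/anonymize flow: an AnalyzerEngine detects PII spans, and an AnonymizerEngine replaces them with entity placeholders. The snippet below is a minimal sketch following the public quickstart; package names and call signatures are taken from the upstream docs and should be verified against the installed release.

```python
# Minimal Presidio sketch, assuming presidio-analyzer and presidio-anonymizer
# are installed (pip install presidio-analyzer presidio-anonymizer).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

text = "Contact John Smith at +1-212-555-0199 or pay to card 4111 1111 1111 1111."

# Detect PII entities (names, phone numbers, credit cards, ...) in the text.
analyzer = AnalyzerEngine()
results = analyzer.analyze(text=text, language="en")

# Replace each detected span with a placeholder such as <PERSON> or <PHONE_NUMBER>.
anonymizer = AnonymizerEngine()
anonymized = anonymizer.anonymize(text=text, analyzer_results=results)
print(anonymized.text)
```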

Data preparation

https://github.com/bigscience-workshop/data-preparation (opens in a new tab)

Code used for sourcing and cleaning the BigScience ROOTS corpus

https://bigscience.huggingface.co/ (opens in a new tab)

Data processing in Dolma

  • Useful material to extract from /datasets/dolma:
    • design principles
    • construction
    • contents
    • analyses
    • data curation practices
    • data curation toolkit (a generic curation sketch follows this list)
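
The Dolma toolkit ships its own taggers, filters, and mixers; as a generic illustration of the curation pattern it follows (tag documents with attributes, then filter on those attributes), here is a hypothetical sketch that does not use the Dolma API. The function names, attribute keys, and thresholds are made up for illustration only.

```python
# Hypothetical sketch of an attribute-tagging + filtering curation step over
# gzipped JSONL documents. Not the Dolma toolkit's actual API.
import gzip
import json

def tag_document(doc: dict) -> dict:
    """Attach illustrative quality attributes to a document record."""
    text = doc.get("text", "")
    doc["attributes"] = {
        "num_words": len(text.split()),
        "mostly_ascii": sum(c.isascii() for c in text) / max(len(text), 1) > 0.9,
    }
    return doc

def curate(input_path: str, output_path: str, min_words: int = 50) -> None:
    """Keep documents that pass the attribute thresholds and write them back out."""
    with gzip.open(input_path, "rt", encoding="utf-8") as src, \
         gzip.open(output_path, "wt", encoding="utf-8") as dst:
        for line in src:
            doc = tag_document(json.loads(line))
            attrs = doc["attributes"]
            if attrs["num_words"] >= min_words and attrs["mostly_ascii"]:
                dst.write(json.dumps(doc) + "\n")

# Example: curate("documents.jsonl.gz", "curated.jsonl.gz")
```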

Data annotation in S2ORC

/datasets/allenai-s2orc-s2ag#s2orc

Some useful insights about data annotation might be derived from S2ORC: The Semantic Scholar Open Research Corpus (opens in a new tab)

  • We introduce S2ORC, a large corpus of 81.1M English-language academic papers spanning many academic disciplines. The corpus consists of rich metadata, paper abstracts, resolved bibliographic references, as well as structured full text for 8.1M open access papers. Full text is annotated with automatically-detected inline mentions of citations, figures, and tables, each linked to their corresponding paper objects. In S2ORC, we aggregate papers from hundreds of academic publishers and digital archives into a unified source, and create the largest publicly-available collection of machine-readable academic text to date. We hope this resource will facilitate research and development of tools and tasks for text mining over academic text.
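
To make the annotation scheme concrete, here is a hypothetical sketch of how inline citation and figure mentions can be stored as character spans and resolved back to linked paper objects. The field names (cite_spans, ref_spans, ref_id) are illustrative approximations of the S2ORC release format, not its exact schema.

```python
# Hypothetical S2ORC-style paragraph record: character spans mark inline
# mentions, and ref_id keys link them to bibliography / figure objects.
paragraph = {
    "text": "Transformers [1] outperform RNNs on long sequences (see Figure 2).",
    "cite_spans": [{"start": 13, "end": 16, "ref_id": "BIBREF0"}],
    "ref_spans": [{"start": 56, "end": 64, "ref_id": "FIGREF1"}],
}
bib_entries = {
    "BIBREF0": {"title": "Attention Is All You Need", "linked_paper_id": "204e3073..."},
}

def resolve_mentions(paragraph: dict, bib_entries: dict) -> list:
    """Map each inline citation span back to its linked bibliography entry."""
    resolved = []
    for span in paragraph["cite_spans"]:
        mention = paragraph["text"][span["start"]:span["end"]]
        resolved.append({"mention": mention, "entry": bib_entries.get(span["ref_id"])})
    return resolved

print(resolve_mentions(paragraph, bib_entries))
```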

Data Processing in Semantic Scholar

allenai.org/ai-for-science (opens in a new tab)

A lot of useful insights for data processing can be derived from the paper The Semantic Scholar Open Data Platform (opens in a new tab)

This paper combines public and proprietary data sources using state-of-the-art techniques for scholarly PDF content extraction and automatic knowledge graph construction to build the Semantic Scholar Academic Graph, the largest open scientific literature graph to date.

The volume of scientific output is creating an urgent need for automated tools to help scientists keep up with developments in their field. Semantic Scholar (S2) is an open data platform and website aimed at accelerating science by helping scholars discover and understand scientific literature. We combine public and proprietary data sources using state-of-the-art techniques for scholarly PDF content extraction and automatic knowledge graph construction to build the Semantic Scholar Academic Graph, the largest open scientific literature graph to date, with 200M+ papers, 80M+ authors, 550M+ paper-authorship edges, and 2.4B+ citation edges. The graph includes advanced semantic features such as structurally parsed text, natural language summaries, and vector embeddings. In this paper, we describe the components of the S2 data processing pipeline and the associated APIs offered by the platform. We will update this living document to reflect changes as we add new data offerings and improve existing services.
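
The graph described above is exposed through the public Semantic Scholar Graph API. The sketch below queries the paper-search endpoint; the URL and field names follow the public API documentation as I recall it and should be checked against the current reference before use.

```python
# Minimal sketch of querying the Semantic Scholar Academic Graph (S2AG).
import requests

def search_papers(query: str, limit: int = 5) -> list:
    """Search S2AG for papers matching a query and return basic metadata."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit,
                "fields": "title,year,citationCount,abstract"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for paper in search_papers("scholarly PDF content extraction"):
    print(paper.get("year"), paper.get("title"))
```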


Processing an LLM's training data

  • datatrove (opens in a new tab)
    • Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
    • DataTrove is a library to process, filter and deduplicate text data at a very large scale. It provides a set of prebuilt commonly used processing blocks with a framework to easily add custom functionality.
    • DataTrove processing pipelines are platform-agnostic, running out of the box locally or on a slurm cluster. Its (relatively) low memory usage and multiple step design makes it ideal for large workloads, such as to process an LLM's training data.
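
A minimal local pipeline, adapted from the datatrove README as I recall it: read JSONL shards, keep documents that pass a simple predicate, and write the survivors. Class names, import paths, and constructor arguments (JsonlReader, LambdaFilter, JsonlWriter, LocalPipelineExecutor) are assumptions and should be verified against the repository.

```python
# Sketch of a local datatrove pipeline; verify class names against the repo.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import JsonlWriter

pipeline = [
    JsonlReader("data/input/"),                      # read JSONL shards into Documents
    LambdaFilter(lambda doc: len(doc.text) > 200),   # keep documents with enough text
    JsonlWriter("data/output/"),                     # write surviving documents back out
]

# Run the same pipeline locally; a Slurm executor can be swapped in for clusters.
executor = LocalPipelineExecutor(pipeline=pipeline, tasks=4)
executor.run()
```

The same pipeline list can be handed to a cluster executor, which is the platform-agnostic design the bullet points above describe.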