---
license: cc-by-3.0
task_categories:
  - text-generation
language:
  - en
tags:
  - multimodal
  - agent
  - workflow
  - spreadsheet
  - pdf
  - image
  - code
  - finance
  - accounting
modalities:
  - text
  - spreadsheet
  - pdf
  - image
  - code
configs:
  - config_name: Finch_Dataset_All
    data_files:
      - split: test
        path:
          - finch_workflows_test.jsonl
---

*Finch cover figure*

# Finch: Benchmarking Finance & Accounting Workflows around Multimodal Enterprise Spreadsheets

This repository contains the dataset for Finch, an enterprise-level benchmark for evaluating an agent’s ability to act like a skilled finance & accounting expert on real-world workflows.


## Dataset Description

Finch focuses on composite finance & accounting workflows that span data entry/import, structuring/formatting, web search, cross-sheet/file retrieval, calculation, financial modeling, validation, translation, visualization, and reporting.

The workflows are derived from real-world enterprise workspaces (Enron, as well as institutions and companies such as the World Bank and the Government of Canada), including:

  - Large, messy spreadsheets with multimodal artifacts, including text, tables, formulas, charts, pivot tables, and images
  - Linked PDFs and documents that provide additional business context

We adopt a three-step workflow labeling process:

  1. Summarizing workflow types supported by real collaborative enterprise email threads.
  2. Deriving concrete workflow instances from versioned spreadsheets and related files using LLMs.
  3. Annotating instructions and reference outputs with domain experts, a meticulous process involving hundreds of hours of work.

This process yields 172 enterprise-grade workflows, primarily multi-task composites, each with carefully written instructions and aligned input/reference files, capturing the intrinsic complexity, messiness, and multimodality of real-world finance & accounting work. This release provides full annotations for the first 72 workflows; the remaining 100 will be released in a subsequent update.

Experimental results show that even frontier agents solve fewer than 30% of the workflows, revealing a substantial performance gap in real-world enterprise scenarios.


## 📁 Dataset Structure

The dataset is released in JSONL format.
Each line corresponds to one workflow-centric example:

```json
{
  "id": "<workflow identifier>",
  "instruction_en": "<English task instruction for a finance & accounting workflow>",
  "source_files": ["<input file name>", "..."],
  "source_files_urls": ["<input file download URL>", "..."],
  "reference_outputs": {
    "files": ["<reference output file name>"],
    "text": "<textual reference output>"
  },
  "reference_file_urls": ["<reference output file download URL>"],
  "task_type": "<task category (e.g., reporting, modeling)>",
  "business_type": "<business domain (e.g., budgeting, trading)>"
}
```
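
As a quick-start sketch, the test split can be loaded with the 🤗 `datasets` library and the input files fetched via the URLs carried in each record. The repository id `HaoyuDong/Finch` below is an assumption inferred from this page; substitute the actual dataset id if it differs.

```python
import os
import urllib.request

from datasets import load_dataset

# Load the test split defined by the Finch_Dataset_All config
# (backed by finch_workflows_test.jsonl).
# NOTE: the repo id "HaoyuDong/Finch" is an assumption; replace it
# with the actual dataset id if it differs.
ds = load_dataset("HaoyuDong/Finch", "Finch_Dataset_All", split="test")

example = ds[0]
print(example["id"], example["task_type"], example["business_type"])
print(example["instruction_en"][:200])        # task instruction (truncated)
print(example["reference_outputs"]["files"])  # reference output file names

# Download the input files for this workflow; the URLs come from
# the record itself (source_files_urls aligns with source_files).
os.makedirs("inputs", exist_ok=True)
for name, url in zip(example["source_files"], example["source_files_urls"]):
    urllib.request.urlretrieve(url, os.path.join("inputs", name))
```

Equivalently, `finch_workflows_test.jsonl` can be parsed directly with Python's `json` module, one record per line.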