Liqs committed · verified
Commit 3dd1714 · 1 Parent(s): 7314cd1

Update README.md

Files changed (1): README.md (+27 −1)
README.md CHANGED
@@ -131,4 +131,30 @@ We do however note a severe skew towards Python and Java with only 3.8% of sampl
  | hard | 0.299672 |
  | medium | 0.183861 |
 
- **Languages** We note that the text data in this dataset consists mostly of commit messages and comments, and is primarily in English. We do not, however, explicitly filter for any human languages.
+ **Languages** We note that the text data in this dataset consists mostly of commit messages and comments, and is primarily in English. We do not, however, explicitly filter for any human languages.
+
+ # Cite Us
+ ```bibtex
+ @inproceedings{lindenbauer-etal-2025-gitgoodbench,
+     title = "{G}it{G}ood{B}ench: A Novel Benchmark For Evaluating Agentic Performance On Git",
+     author = "Lindenbauer, Tobias and
+       Bogomolov, Egor and
+       Zharov, Yaroslav",
+     editor = "Kamalloo, Ehsan and
+       Gontier, Nicolas and
+       Lu, Xing Han and
+       Dziri, Nouha and
+       Murty, Shikhar and
+       Lacoste, Alexandre",
+     booktitle = "Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)",
+     month = jul,
+     year = "2025",
+     address = "Vienna, Austria",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2025.realm-1.19/",
+     doi = "10.18653/v1/2025.realm-1.19",
+     pages = "272--288",
+     ISBN = "979-8-89176-264-0",
+     abstract = "Benchmarks for Software Engineering (SE) AI agents, most notably SWE-bench, have catalyzed progress in programming capabilities of AI agents. However, they overlook critical developer workflows such as Version Control System (VCS) operations. To address this issue, we present GitGoodBench, a novel benchmark for evaluating AI agent performance on Version Control System (VCS) tasks. GitGoodBench covers three core Git scenarios extracted from permissive open-source Python, Java, and Kotlin repositories. Our benchmark provides three datasets: a comprehensive evaluation suite (900 samples), a rapid prototyping version (120 samples), and a training corpus (17,469 samples). We establish baseline performance on the prototyping version of our benchmark using GPT-4o equipped with custom tools, achieving a 21.11{\%} solve rate overall. We expect GitGoodBench to serve as a crucial stepping stone toward truly comprehensive SE agents that go beyond mere programming."
+ }
+ ```