Full corpus: To seed our corpus, we manually curated 23 highly-cited survey papers that provide comprehensive overviews of the state of automated misinformation detection at the time of writing; we relied on these surveys to identify detection methods that have been well received by the research community. We searched Google Scholar with the queries ``survey misinformation detection'' and ``survey fake news detection'' and collected the most frequently cited papers from the past 10 years.\footnote{This time frame was naturally enforced by a lack of well-cited older publications and was not fixed before we began our sampling process.} We then inspected each paper's references for related work, reading the abstracts of these works to confirm relevance.

We supplemented this corpus of papers with publications surfaced by further Google Scholar queries of the form ``misinformation detection $x$'' and ``automated fact checking $x$,'' where $x$ is an element of the set \{claims, news articles, accounts, networks, websites, influence operations\}. For each $x$, we took the union of the results returned by both queries and collected the 50 most highly cited papers; to counter potential bias toward older publications, we ranked papers by citation rate per year rather than by total citation count. These search terms are deliberately over-inclusive: we manually reviewed all works for relevance after the initial sampling step. After removing out-of-scope works (see {In- and out-of-scope work}) from this set of 250 papers, 219 eligible papers remained. To ensure that security-oriented approaches to detection were represented in our corpus, we conducted a separate snowball sampling search for work published in four A* security research venues (USENIX Security, IEEE S\&P, NDSS, and ACM CCS), using the keywords listed previously.
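The citation-rate ranking used to counter age bias can be sketched as follows; the paper records, field layout, and reference year are hypothetical, not taken from our actual pipeline.

```python
# Hypothetical paper records: (title, total_citations, publication_year).
papers = [
    ("Paper A", 900, 2012),
    ("Paper B", 300, 2021),
    ("Paper C", 450, 2016),
]

def citations_per_year(total_citations, pub_year, current_year=2024):
    # Normalize by paper age so recent publications are not penalized
    # for having had less time to accumulate citations.
    age = max(current_year - pub_year, 1)
    return total_citations / age

# Rank by citation rate per year rather than raw citation count,
# then keep the top 50 per query set (here, all three toy records).
ranked = sorted(papers, key=lambda p: citations_per_year(p[1], p[2]), reverse=True)
top_k = ranked[:50]
```

With these toy values, the 2021 paper outranks the 2012 paper despite having a third of its total citations, which is the intended effect of the normalization.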
This search resulted in the addition of 29 works, mostly addressing the detection of accounts and networks that spread misinformation. Our final corpus comprises 248 papers published between 2009 and 2024, inclusive.

Focus corpus: To develop our focus corpus, we identified papers with sufficient responses to each coding field in our design taxonomy (found on page 4): one coder manually reviewed annotations for the full corpus and noted papers that provide at least general descriptions of their chosen dataset, features, and model. From these papers, we then sampled works from each scope in proportion to that scope's representation in the full corpus, oversampling within each scope for diversity of methods and data. This corpus comprised 87 papers.
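The proportional sampling step can be sketched as follows; the scope labels and per-scope counts are hypothetical placeholders (only the corpus sizes, 248 and 87, come from the text), and the sketch shows quota allocation only, not the subsequent oversampling for diverse methods and data.

```python
from collections import Counter

# Hypothetical scope labels for the 248-paper full corpus (one per paper).
full_corpus_scopes = ["claims"] * 100 + ["accounts"] * 80 + ["websites"] * 68

def proportional_quota(scope_counts, target_size):
    # Allocate focus-corpus slots to each scope in proportion to its
    # share of the full corpus (rounded; totals may differ slightly
    # from target_size when the rounded shares do not sum exactly).
    total = sum(scope_counts.values())
    return {s: round(target_size * n / total) for s, n in scope_counts.items()}

quotas = proportional_quota(Counter(full_corpus_scopes), target_size=87)
```

Within each scope's quota, papers would then be chosen to maximize variety in methods and data rather than uniformly at random.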