Published June 26, 2024 | Version v1
Book chapter · Open Access

Exploratory and Confirmatory Prompt Engineering

  • 1. Humboldt-Universität zu Berlin

Description

In software development, components that use large language models (LLMs) can easily be deployed for specific tasks; LLMs are particularly useful for tasks that would be expensive to implement manually. For successful integration, however, the generated outputs must meet specific requirements for further processing. This paper introduces an evaluation instrument for exploratory and confirmatory prompt engineering based on prompt templates. A direct evaluation methodology is presented to assess prompt outputs quantitatively. In addition, a methodology is introduced in which LLMs generate ratings that are then evaluated for alignment with human judgments. Based on the results, the most promising prompt templates can be identified. The evaluation instrument introduced in this paper should be considered when designing software components that depend on high-quality LLM-generated content.
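The two methodologies summarized above can be sketched in a few lines. The sketch below is illustrative only and not taken from the paper: the requirement check (valid JSON with an integer `score` field), the sample outputs, and the rating data are all assumptions chosen to make the example self-contained.

```python
import json

def meets_requirements(output: str) -> bool:
    """Direct evaluation: check that an output is valid JSON with an integer 'score'.
    (The concrete requirement is a placeholder, not the paper's.)"""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and isinstance(data.get("score"), int)

# Hypothetical outputs collected for two prompt templates.
outputs_by_template = {
    "template_a": ['{"score": 4}', '{"score": 5}', 'Sure! Here is the score ...'],
    "template_b": ['{"score": 3}', '{"score": 4}', '{"score": 5}'],
}

# Direct evaluation: fraction of outputs per template that meet the requirements.
pass_rate = {t: sum(map(meets_requirements, outs)) / len(outs)
             for t, outs in outputs_by_template.items()}

def pearson(xs, ys):
    """Alignment check: Pearson correlation between LLM and human ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

llm_ratings   = [4, 5, 3, 4, 5]   # ratings produced by the LLM (illustrative)
human_ratings = [4, 4, 3, 5, 5]   # ratings produced by human raters (illustrative)
alignment = pearson(llm_ratings, human_ratings)
```

With these placeholder numbers, `template_b` passes the requirement check for every output, and the alignment score indicates how closely the LLM's ratings track the human ones; a template would be selected on both criteria.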

Cite as:
S. Rüdian, "Exploratory and Confirmatory Prompt Engineering", in Educational Prompt Engineering, July 2024, Berlin, DE, pp. 1-6. doi:10.5281/zenodo.12549309

BibTex:
@inbook{Rüdian2024e,
  author = "S. Rüdian",
  title = "Exploratory and Confirmatory Prompt Engineering",
  booktitle = "Educational Prompt Engineering",
  pages = "1--6",
  month = "July",
  year = 2024,
  address = {Berlin, DE},
  doi = "10.5281/zenodo.12549309"
}

Files (184.2 kB)

exploratory-and-confirmatory-prompt-engineering.pdf