Published October 2, 2024 | Version 1.0.2
Software | Open Access

AI-CRP: An AI-Cowriting Research Platform

Description

This repository contains the source code for an AI-Cowriting Research Platform (AI-CRP) that supports empirical studies of writers' interactions with large language models, as used in Williams-Ceci, Sterling, et al., "Bias in AI Autocomplete Suggestions Leads to Attitude Shift on Societal Issues," and Jakesch, Maurice, et al., "Co-writing with opinionated language models affects users' views."

The repository consists of HTML and JS files for a writing app with an AI writing assistant, a set of serverless functions that generate writing suggestions and store participants' interaction data in the backend, as well as code snippets for integrating the experimental app into a Qualtrics survey. We also include instructions for serving the app through Firebase below.

Setting up the dependencies and the Firebase project and configuring the platform takes about 30 minutes. This software has been tested on macOS 14.6.1 with npm v10.8.3, Node v22.9.0, and Firebase CLI v13.20.2. For questions, contact Maurice Jakesch or one of the study authors.

Overview of the research platform

An overview of the software architecture is provided in `architecture.png`. This software does not rely on preexisting data for the suggestions; instead, it generates text suggestions based on what users write. As a participant writes their text, the text is sent to GPT with a request for completion on the backend. GPT generates a completion that takes into account instructions provided by the researcher. The suggestions are logged in the backend and delivered to the frontend, where they are shown dynamically. After the participant has written their response, the app records variables such as the total number of suggestions accepted by the user (e.g. `acceptedSuggestions`), stores them in a database, and returns them to the Qualtrics parent window.
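The request flow above can be sketched roughly as follows. Note that the function name `getSuggestion`, the payload fields, and the response shape are illustrative assumptions, not the platform's actual API; the real names live in /public/resources/assistant.js and functions/index.js.

```javascript
// Hypothetical sketch of the frontend-to-backend suggestion flow.
var backend = "https://us-central1-<YOUR PROJECT>.cloudfunctions.net/"; // assumed root URL

// Build the request body sent to the serverless function.
function buildSuggestionRequest(textSoFar, participantGroup) {
  return {
    prompt: textSoFar,       // what the participant has written so far
    group: participantGroup, // selects the researcher-defined instructions
  };
}

// Request a completion and hand the suggestion to the UI (names are illustrative).
async function requestSuggestion(textSoFar, participantGroup) {
  const res = await fetch(backend + "getSuggestion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSuggestionRequest(textSoFar, participantGroup)),
  });
  const data = await res.json();
  return data.suggestion; // shown dynamically in the writing interface
}
```
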

Requirements and setting up the project

Download the repository, then install Firebase (see the Firebase Quickstart), Node.js v22, and npm. Log in (firebase login) and initialize a new Firebase project in the repository's main folder (firebase init). Select Firestore, Hosting, Functions, and Emulators as the features you want to set up. In the following steps, give your project a name and accept the defaults.

You should now see "Your Firebase project is ready!" with the project information. There will also be a warning about Cloud Firestore, which we will address next: open the new project in the Firebase Console. Here, select Build > Firestore Database > Create database > Create to set up the Firestore database. Also, upgrade the project from the free Spark plan to the paid Blaze plan (bottom left or under Settings > Usage and billing) to use the Cloud Functions in the deployed version of the app (not needed for local testing).

Finally, return to your local repository, navigate to the functions directory, and run npm install to install the dependencies listed in functions/package.json. Open the file functions/index.js and insert your OpenAI secret key in the authorization header at the top, with the word "Bearer" before it. The line of code should look like this: "Authorization": "Bearer [YOUR API KEY]".
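In context, the header might look like the sketch below. Only the header format comes from the instructions above; the surrounding variable names are illustrative.

```javascript
// Sketch of the authorization header near the top of functions/index.js.
const OPENAI_API_KEY = "[YOUR API KEY]"; // replace with your OpenAI secret key

const headers = {
  "Content-Type": "application/json",
  // The word "Bearer" plus a space must precede the key:
  "Authorization": "Bearer " + OPENAI_API_KEY,
};
```
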

How to test the application 

In the main repository, run firebase serve. This will start a local server with emulators for the Firebase functions so you can test and debug the application. The CLI will show the local server port, which you can click on to view the app, as well as the endpoints for your Firebase functions. You will need to connect the frontend to the function endpoints: take note of the function endpoints (the URLs in the console output following "http function initialized" that start with http://localhost), and open the file /public/resources/assistant.js. Here, insert your local function endpoint (without the function name) at line 5 after var backend = (ignore line 2 for now; it is the endpoint for the deployment in the step below). The code already contains a template of the URL -- you will mostly need to insert your project name and make sure the port and region are correct.
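The two endpoint lines in /public/resources/assistant.js might then look something like this. The exact URL templates (port, region, variable name for the cloud endpoint) are assumptions based on typical Firebase projects; compare against the template already present in the file.

```javascript
// Sketch of the endpoint configuration in /public/resources/assistant.js.
// Line 2 (used after deployment, see the next section):
var cloudBackend = "https://us-central1-<YOUR PROJECT>.cloudfunctions.net/";
// Line 5 (local emulator endpoint, without the function name):
var backend = "http://localhost:5001/<YOUR PROJECT>/us-central1/";
```
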

Finally, open the local server URL in your browser. If everything is set up correctly, you will see the writing interface and the assistant will start generating writing suggestions automatically, as shown in the screenshot.png file. If not, check your browser JS console and the output of your CLI command to see what's going on. Likely, you have not set the correct endpoints in /public/resources/assistant.js or you are missing the OpenAI authentication bearer in functions/index.js.

How to deploy your application

In the main repository, run firebase deploy. The CLI will host the HTML files in the public folder and create the serverless cloud functions. Take note of the cloud function root URL in the output (without the function name) and insert it in /public/resources/assistant.js as the cloud backend (line 2). Deploy the app again to reflect the new backend in the hosted app; since the function code has not changed, the quicker firebase deploy --except functions command suffices. Finally, open the hosting URL from the output in your browser to view the web app and check that the assistant is working. If not, check the browser console and the Firebase Functions dashboard for errors.

How to customize the app

Adjustments can be implemented in the /public/index.html file, so you can work with several versions of the file in parallel and test them locally. In the HTML file, you can adjust the instructions and the interface's appearance. At the bottom of the file, you will find the initialization parameters of the assistant. The key parameters here are generation_temperature and systemPrompt. The system prompt steers the assistant's behavior, for example by imbuing it with a specific opinion, as in the example shown. It is selected from a list of prompts based on the participantGroup variable, which is initialized from the call URL. To use a text continuation engine instead of a chat completion engine, define a generationPrefix instead of a systemPrompt. To change the behavior of the writing assistant or the general style of the interface, look into /public/resources/assistant.js and /public/styles.css, respectively.
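As a minimal sketch of the prompt selection described above: only the names generation_temperature, systemPrompt, and participantGroup come from this README; the prompt texts, the temperature value, and the helper function are invented examples.

```javascript
// Illustrative sketch of the assistant initialization in /public/index.html.
var systemPrompts = {
  pro: "When suggesting text, argue in favor of the essay topic.", // example opinion
  con: "When suggesting text, argue against the essay topic.",     // example opinion
};

// participantGroup is initialized from the call URL (e.g. ?group=pro).
function buildAssistantConfig(participantGroup) {
  return {
    generation_temperature: 0.7,                         // invented example value
    systemPrompt: systemPrompts[participantGroup] || "", // unknown or control group: no opinionated prompt
  };
}
```
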

How to integrate the app with Qualtrics

Paste the code included in qualtrics_integration.html into a survey question, selecting the HTML view when editing the question (not the Rich Content Editor). Also, make sure that your Qualtrics license includes iFrames and JavaScript. Adjust the src=https://<YOUR URL>/index.html?rid=.. parameter of the iframe to match the URL of your Firebase deployment, and initialize the following fields in the Qualtrics survey flow: group, essay, accepted_suggestions, requested_suggestions, suggestions_visible_time. Set the group parameter in the logic of the Qualtrics survey flow based on your experiment -- it will be passed to the app to select the matching system prompt.

To show no suggestions, set group to "control". The remaining parameters can be left empty; the script populates them when users submit their writing. The text participants wrote will be in the essay field, with text accepted from the AI underlined. For a more detailed analysis of interactions, take a look at the data in the Firestore, where every user action and suggestion is saved and time-stamped. Note that the snippet also hides the survey continuation button, as survey continuation is triggered directly by the web app.
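A sketch of how the embedded app might hand these fields back to the Qualtrics parent window on submit. The payload builder and the use of postMessage are assumptions for illustration; the actual mechanism is in the repository's qualtrics_integration.html snippet.

```javascript
// Hypothetical result payload returned to the Qualtrics parent window.
function buildResultPayload(essayHtml, accepted, requested, visibleMs) {
  return {
    essay: essayHtml,                   // participant's text; AI-accepted spans underlined
    accepted_suggestions: accepted,     // count of suggestions the user accepted
    requested_suggestions: requested,   // count of suggestions generated
    suggestions_visible_time: visibleMs // total time suggestions were on screen
  };
}

// Inside the iframe, on submit (sketch):
// window.parent.postMessage(buildResultPayload(html, 4, 9, 120000), "*");
```
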

How to cite the use of this platform

When using this platform in your research, kindly cite it as: Jakesch, M., Bhat, A., Kadoma, K., Williams-Ceci, S., Zalmanson, L., & Naaman, M. (2024). AI-CRP: An AI-Cowriting Research Platform (Version 1.0.0) [Computer software]. https://doi.org/10.5281/zenodo.13149126.

Files

mauricejk/ai-cowriting-research-platform-1.0.2.zip

