Core Fillers Settings Storing Start


Info about this application

Below is a very brief summary of the Concealed Information Test as implemented in this app. To properly understand the test, please read the online documentation.

The standard and enhanced versions aim to reveal whether or not a certain information detail – the "probe" – is known to the tested person. The "irrelevant" and "target" items must be similar to the probe, and indistinguishable from it to a person who does not know the relevance of the probe (e.g., the probe is a stolen suitcase, and the irrelevants and the target are other suitcases). In this app, no designated "irrelevant" items need to be entered: all items are potential probes, and the real probe can be given as any of them. In the end, each of the items (except the target) is automatically evaluated as a potential probe, with the other four non-target items serving as irrelevants in each case, as in the sketch below.
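
As a rough illustration (not the app's actual code), this rotation of potential probes could look like the following sketch; the item values are taken from the demo further below, and the variable names are only illustrative.

```javascript
// Illustrative sketch: each entered item, except the target, is treated in turn
// as the potential probe, with the remaining four items serving as irrelevants.
const items = ['MAY 09', 'JUN 14', 'DEC 05', 'FEB 12', 'OCT 23']; // Probes 1-5 from the demo below

function probeEvaluations(allItems) {
  return allItems.map((candidateProbe, i) => ({
    probe: candidateProbe,
    irrelevants: allItems.filter((_, j) => j !== i), // the other four items
  }));
}

console.log(probeEvaluations(items)); // five evaluation sets, one per potential probe
```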

The enhanced version is much more effective than the standard one; the latter is meant only for experimental purposes. The no-target version is a tentative approach, still in development, that aims to make the test applicable in cases where the probe is actually known to the participant.

At the end of the test, there are no instructions or buttons, only the text "Test completed." (in the selected language). This is to prevent subjects from seeing the results without permission. To show the results, swipe right on this text.
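
A minimal sketch (not the app's actual code, with hypothetical element IDs) of how such a swipe-right gesture could reveal the results view in a web-based app:

```javascript
// Sketch with hypothetical element IDs: reveal the results only after a
// rightward swipe on the "Test completed." text.
let touchStartX = null;
const completedText = document.getElementById('completed-text'); // hypothetical ID
const resultsView = document.getElementById('results-view');     // hypothetical ID

completedText.addEventListener('touchstart', (event) => {
  touchStartX = event.changedTouches[0].clientX;
});

completedText.addEventListener('touchend', (event) => {
  const deltaX = event.changedTouches[0].clientX - touchStartX;
  if (touchStartX !== null && deltaX > 100) { // at least 100 px to the right
    resultsView.style.display = 'block';
  }
});
```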
Demo

For demonstration's sake, take a situation in which you want to reveal whether a person is aware of a certain date. For example, someone is suspected of concealing knowledge of an upcoming terrorist attack on May 9. In this case, five additional, different dates should be chosen randomly, for example, June 14, December 5, August 25, February 12, and October 23. From among these, one should be chosen randomly as the target, for example, August 25. These should be filled in accordingly on the starting page: AUG 25 as Target, and the other dates as Probes (1-5). (In this case, the real "probe" is MAY 09, but the app automatically evaluates all items as potential probes.) The Subject ID could be "CIT_demo_suspect_01".

All these data can be filled in automatically for a demonstration using the following button:
Demo data loaded.
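
A minimal sketch (with hypothetical field names) of what loading the demo data amounts to:

```javascript
// Sketch with hypothetical field names: the demo values described above.
function loadDemoData(settings) {
  settings.subjectId = 'CIT_demo_suspect_01';
  settings.target = 'AUG 25';
  // The real probe (MAY 09) is entered simply as one of the five probe items.
  settings.probes = ['MAY 09', 'JUN 14', 'DEC 05', 'FEB 12', 'OCT 23'];
  return settings;
}

console.log(loadDemoData({}));
```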



If you would like to run a pilot test to see that the method truly detects a relevant detail, one good way is to enter your own personal name (e.g., family name) as one probe and, as the target and the other probes, enter other, randomly chosen names. If you then complete the test in the enhanced version with, say, two blocks, it is very likely that you will be detected. (Personal names are typically highly personally relevant, and thus appear highly salient in the task.) For a control test, to show a null effect (simulating innocence), you may enter random names for all items. Note that, in this case, the tested person should not know which item is the presumed real probe! The mere knowledge of this item's relevance (as the probe to be tested) can in itself lead to different responding.
Results

Path to files on the device (tap to show) (tap to hide)
{{citP.path}}
{{citres.v.subj_id}} ({{citres.v.date}})

Summary CIT results for {{citP.cit_results.subj_id}}
(Date: {{citP.cit_results.date}})
Probe | RTP vs I (ms) | RTP (SD) (ms) | RTI (SD) (ms) | dCIT | ARP vs I (%) | ARP (%) | ARI (%)
Probe {{num}} | {{citP.cit_results['probe' + num]['rt_p_vs_i']}} | {{citP.cit_results['probe' + num]['rt_probe']}} ({{citP.cit_results['probe' + num]['rt_probe_sd']}}) | {{citP.cit_results['probe' + num]['rt_irr']}} ({{citP.cit_results['probe' + num]['rt_irr_sd']}}) | {{citP.cit_results['probe' + num]['dcit']}} | {{citP.cit_results['probe' + num]['acc_p_vs_i']}} | {{citP.cit_results['probe' + num]['acc_probe']}} | {{citP.cit_results['probe' + num]['acc_irr']}}

Overall accuracy rate (number of correct responses divided by the number of all trials): {{citP.cit_results.ar_overall}}.

Description of variables (tap to show) (tap to hide)
Response times (for all correct responses, in milliseconds): probe RT mean minus irrelevant RT mean (RTP vs I), probe RT mean and standard deviation (RTP (SD)), irrelevant RT mean and standard deviation (RTI (SD)), and the uncorrected Cohen's d between all probe and all irrelevant RTs (dCIT). Accuracy rates (ratio of correct responses to all responses, including incorrect and too slow ones, in percentage): probe accuracy rate minus irrelevant accuracy rate (ARP vs I), probe accuracy rate (ARP), and irrelevant accuracy rate (ARI). Responses with RTs below 150 ms are excluded from all RT and AR analyses, except for the overall accuracy rate calculation. Trials (i.e., responses) from practice rounds are not included in any of the statistics above.
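
A minimal sketch (not the app's actual code) of how these variables can be derived from trial-level data; the trial fields used here are assumed names, not the app's internal format:

```javascript
// Sketch; assumed trial fields (hypothetical names):
// { type: 'probe' | 'irrelevant', rt: <ms>, correct: true | false, practice: true | false }
function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function sd(xs) {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1));
}
// Uncorrected Cohen's d: difference of means divided by the pooled standard deviation.
function cohensD(a, b) {
  const pooledSD = Math.sqrt(((a.length - 1) * sd(a) ** 2 + (b.length - 1) * sd(b) ** 2) /
                             (a.length + b.length - 2));
  return (mean(a) - mean(b)) / pooledSD;
}

function probeStats(trials) {
  // Practice trials and responses below 150 ms are excluded.
  const main = trials.filter((t) => !t.practice && t.rt >= 150);
  const probe = main.filter((t) => t.type === 'probe');
  const irr = main.filter((t) => t.type === 'irrelevant');
  const probeRTs = probe.filter((t) => t.correct).map((t) => t.rt); // RTs: correct responses only
  const irrRTs = irr.filter((t) => t.correct).map((t) => t.rt);
  const acc = (ts) => 100 * ts.filter((t) => t.correct).length / ts.length; // accuracy rate in %
  return {
    rt_p_vs_i: mean(probeRTs) - mean(irrRTs),
    rt_probe: mean(probeRTs), rt_probe_sd: sd(probeRTs),
    rt_irr: mean(irrRTs), rt_irr_sd: sd(irrRTs),
    dcit: cohensD(probeRTs, irrRTs),
    acc_p_vs_i: acc(probe) - acc(irr),
    acc_probe: acc(probe),
    acc_irr: acc(irr),
  };
}
```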

Evaluation using dCIT (tap to show) (tap to hide)
The dCIT measure may be used to evaluate whether or not a given probe was recognized. The dCIT typically falls between around −0.3 and 0.8, and a larger value always indicates a higher likelihood that the given probe was recognized. There is as yet no established optimal cut-off value for this evaluation, but, as a rough approximation based on previous results, the following table lists some possible evaluative labels for given boundaries (a corresponding sketch follows below the table).

dCIT > 0.4 strong indication of recognition
dCIT > 0.3 and dCIT <= 0.4 fair indication of recognition
dCIT > 0.1 and dCIT <= 0.3 weak indication of recognition
dCIT > 0 and dCIT <= 0.1 indeterminate
dCIT > −0.1 and dCIT <= 0 weak indication of non-recognition
dCIT <= −0.1 fair indication of non-recognition

(There is no strong indication of innocence: a very fast probe response is no more expected in case of innocence than in case of guilt.)
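
A minimal sketch mapping a dCIT value to the labels in the table above:

```javascript
// Sketch: evaluative label for a given dCIT value, following the table above.
function dcitLabel(dcit) {
  if (dcit > 0.4) return 'strong indication of recognition';
  if (dcit > 0.3) return 'fair indication of recognition';
  if (dcit > 0.1) return 'weak indication of recognition';
  if (dcit > 0) return 'indeterminate';
  if (dcit > -0.1) return 'weak indication of non-recognition';
  return 'fair indication of non-recognition';
}

console.log(dcitLabel(0.35)); // 'fair indication of recognition'
```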

For more details, see the online documentation.



Path to full data file:
{{citP.path}}
File name:
{{citP.cit_results.file_nam_disp}}


Forward data with subject ID {{citP.cit_results.subj_id}}
(Date: {{citP.cit_results.date}})
The data can be sent (shared, stored) via any chosen local application (e.g., email or cloud storage).
 
You can also simply copy the data to the clipboard and then paste it into any other application.
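
A minimal sketch (not the app's actual code) of copying the results text via the standard Clipboard API, whose availability may depend on the device's web view:

```javascript
// Sketch: copy the results text to the clipboard via the standard Clipboard API.
async function copyResults(resultsText) {
  try {
    await navigator.clipboard.writeText(resultsText);
    console.log('Results copied to clipboard.');
  } catch (err) {
    console.error('Copying failed:', err);
  }
}
```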
 


Path to all CIT results files on the device:
{{citP.path}}
File name:
{{citP.cit_results.file_nam_disp}}

Subject ID:

Main items in the test
Target: Probe 1: Probe 2: Probe 3: Probe 4: Probe 5:


Target filler 1: Target filler 2: Target filler 3: Nontarget filler 1: Nontarget filler 2: Nontarget filler 3: Nontarget filler 4: Nontarget filler 5: Nontarget filler 6:

Choose language: {{trP.lgs[lg_code]}} ({{lg_code.toUpperCase()}})
Version: Enhanced / Standard / No-target
Number of blocks: 1 / 2 / 3 / 4
Response time limit (ms)
Interstimulus interval min.-max. (ms)
Auto-transform input to uppercase
Optionally, a default email address (or addresses) can be given for data sending. The text given below will be filled in automatically in the "to:" field (but can always be changed before sending) when an email application is chosen to send data (under the Results menu).
Default email(s)
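
A minimal sketch (not the app's actual code) of how a default address could prefill the "to:" field via a standard mailto: link; the subject line format here is only illustrative:

```javascript
// Sketch: a mailto: link prefills the "to:" field; the chosen email application
// then handles the actual sending.
function openEmailDraft(defaultEmails, subjectId, resultsText) {
  const url = 'mailto:' + encodeURIComponent(defaultEmails) +
              '?subject=' + encodeURIComponent('CIT data: ' + subjectId) +
              '&body=' + encodeURIComponent(resultsText);
  window.location.href = url; // hands over to the selected email application
}
```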

Current settings can be stored in an online database and later reloaded using the given Identifier and Password. (See the documentation for details.)
Identifier Password
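
A minimal sketch of how such online storing and reloading might work; the endpoint URL and request format here are entirely hypothetical, and the actual service is described in the online documentation:

```javascript
// Entirely hypothetical endpoint and request format; see the online documentation
// for the actual service.
async function storeSettingsOnline(identifier, password, settings) {
  const response = await fetch('https://example.com/cit-settings/store', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ identifier, password, settings }),
  });
  if (!response.ok) throw new Error('Storing settings failed: ' + response.status);
}

async function loadSettingsOnline(identifier, password) {
  const response = await fetch('https://example.com/cit-settings/load', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ identifier, password }),
  });
  if (!response.ok) throw new Error('Loading settings failed: ' + response.status);
  return response.json();
}
```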





Local offline storage on this device will simply reload the current settings whenever the app starts. (Default values remain unchanged and can be restored anytime.)
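
A minimal sketch (not the app's actual code) of such local persistence using the standard localStorage API; the storage key is hypothetical:

```javascript
// Sketch with a hypothetical storage key: save and reload settings locally.
const SETTINGS_KEY = 'cit_app_settings'; // hypothetical key name

function saveSettingsLocally(settings) {
  localStorage.setItem(SETTINGS_KEY, JSON.stringify(settings));
}

function loadSettingsLocally(defaults) {
  const stored = localStorage.getItem(SETTINGS_KEY);
  // Defaults remain available and are used whenever nothing has been stored.
  return stored ? { ...defaults, ...JSON.parse(stored) } : { ...defaults };
}
```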
Settings were saved.
"START CIT" also saves current settings
Data sharing consent choices: No / Yes / Yes, but without items

Warning: it seems you are connected to the internet. It is best to turn the connection off to avoid interruptions.
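
A minimal sketch of how the presence of a connection can be checked with the standard navigator.onLine property (which may not be fully reliable on all devices):

```javascript
// Sketch: show the warning only when the device reports an active connection.
function shouldWarnAboutInternet() {
  return typeof navigator !== 'undefined' && navigator.onLine === true;
}
```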

There are duplicate item names: {{duplicates}}. All item names must be unique to start the test.
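
A minimal sketch of how duplicate item names can be detected before the test starts:

```javascript
// Sketch: collect duplicate item names among all entered items.
function findDuplicates(itemNames) {
  const seen = new Set();
  const duplicates = new Set();
  for (const name of itemNames) {
    if (seen.has(name)) duplicates.add(name);
    seen.add(name);
  }
  return [...duplicates];
}

console.log(findDuplicates(['AUG 25', 'MAY 09', 'MAY 09'])); // ['MAY 09']
```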

Please fill in all necessary details.





{{trP.consent[trP.lang]}} {{consentitems}}

{{trP.consent_q[trP.lang]}}









{{citP.feed_text}}

{{citP.stimulus_text}}

{{trP.taptostart[trP.lang]}}

{{trP.cit_completed[trP.lang]}}