
Commit eaac392

Clean evidence infer treatment. (#705)
* Clean subset 2.0.
* Clean subset 1.1.
* Minor change to trigger tests.
* Revert "Minor change to trigger tests." This reverts commit 0a533ee.
1 parent 77fef9a commit eaac392

2 files changed: 114 additions & 138 deletions

Changes in this file: 74 additions & 86 deletions
@@ -1,127 +1,115 @@
 dataset: evidence_infer_treatment
 subset: '1.1'
 templates:
-  1b538c15-d7b7-4139-8755-fb7d28c19a4d: !Template
+  6148ddc8-b722-4d4f-8498-ff36c45803d6: !Template
     answer_choices: null
-    id: 1b538c15-d7b7-4139-8755-fb7d28c19a4d
+    id: 6148ddc8-b722-4d4f-8498-ff36c45803d6
     jinja: "{% set annotation_length = Prompts.Annotations | length %}\n\n{% set specific_sub_annotation\
       \ = range(0, annotation_length) | choice %}\n\n{% set sub_annotation_length\
       \ = Prompts.Annotations[specific_sub_annotation].Annotations | length %}\n\n\
       {% set sub_sub_annotation = [0] %}\n\n{% if sub_annotation_length > 0 %}\n\n\
-      {{Text[:1200]}} \n\n{{Text[-300:]}}\n\nThe text above contains important details\
-      \ for answering the following questions:\n\nThe relevant annotations:\n\n{{\
-      \ sub_sub_annotation.pop() | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0,\
-      \ sub_annotation_length) | choice) | replace(None, \"\") }}\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
-      \nNow on the basis of annotation and the text the outcome is:\n\n{% endif %}\n\
-      \n|||\n\n\n{{Prompts.Outcome[specific_sub_annotation]}}"
+      The following text snippets contain important information:\n\n{{Text[:1200]}}\
+      \ \n\n{{Text[-300:]}}\n\nThe relevant annotations are:\n\n{{ sub_sub_annotation.pop()\
+      \ | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0, sub_annotation_length)\
+      \ | choice) | replace(None, \"\") }}\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
+      \nRegarding the following comparator\n\n{{Prompts.Comparator[specific_sub_annotation]}},\n\
+      \nthe intervention was\n\n{% endif %}\n\n|||\n\n\n{{Prompts.Intervention[specific_sub_annotation]}}.\n"
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - Accuracy
       original_task: false
-    name: template_4
+    name: Identify intervention
     reference: ''
-  7ce46648-2bcc-4e67-95f5-c2a0d0612f9b: !Template
+  9ea1dca5-1867-48f6-9a0f-1c55b19c4606: !Template
     answer_choices: null
-    id: 7ce46648-2bcc-4e67-95f5-c2a0d0612f9b
+    id: 9ea1dca5-1867-48f6-9a0f-1c55b19c4606
     jinja: "{% set annotation_length = Prompts.Annotations | length %}\n\n{% set specific_sub_annotation\
       \ = range(0, annotation_length) | choice %}\n\n{% set sub_annotation_length\
       \ = Prompts.Annotations[specific_sub_annotation].Annotations | length %}\n\n\
       {% set sub_sub_annotation = [0] %}\n\n{% if sub_annotation_length > 0 %}\n\n\
-      {{ sub_sub_annotation.pop() | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0,\
-      \ sub_annotation_length) | choice) | replace(None, \"\") }}\n\nAfter reading\
-      \ the following text:\n\n{{Text[:1200]}} \n\n{{Text[-300:]}}\n\nThe relevant\
-      \ annotations:\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
-      \nNow if the comparator is:\n\n{{Prompts.Comparator[specific_sub_annotation]}}.\n\
-      \nand the intervention is:\n\n{{Prompts.Intervention[specific_sub_annotation]}}.\n\
-      \n The outcome is: \n\n{% endif %}\n\n|||\n\n{{Prompts.Outcome[specific_sub_annotation]}}"
+      The first text snippet that is important to understand is:\n\n{{Text[:1200]}}\
+      \ \n\nthe second text snippet is:\n\n{{Text[-300:]}}\n\nThe relevant annotations:\n\
+      \n{{ sub_sub_annotation.pop() | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0,\
+      \ sub_annotation_length) | choice) | replace(None, \"\") }}\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
+      \nRegarding the intervention\n\n{{Prompts.Intervention[specific_sub_annotation]}}\n\
+      \nwith the outcome\n\n{{Prompts.Outcome[specific_sub_annotation]}},\n\nthe comparator\
+      \ was:\n\n{% endif %}\n\n|||\n\n{{Prompts.Comparator[specific_sub_annotation]}}."
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - Accuracy
       original_task: false
-    name: template_2
-    reference: ''
-  7d618260-32fb-405d-ab79-cec67f589de7: !Template
-    answer_choices: null
-    id: 7d618260-32fb-405d-ab79-cec67f589de7
-    jinja: "{% set annotation_length = Prompts.Annotations | length %}\n\n{% set specific_sub_annotation\
-      \ = range(0, annotation_length) | choice %}\n\n{% set sub_annotation_length\
-      \ = Prompts.Annotations[specific_sub_annotation].Annotations | length %}\n\n\
-      {% set sub_sub_annotation = [0] %}\n\n{% if sub_annotation_length > 0 %}\n\n\
-      Read the following text:\n\n{{ sub_sub_annotation.pop() | replace(0, \"\") }}\n\
-      {{ sub_sub_annotation.append(range(0, sub_annotation_length) | choice) | replace(None,\
-      \ \"\") }}\n\n{{Text[:1200]}} \n\n{{Text[-300:]}}\n\nNow the comparator is:\n\
-      \n{{Prompts.Comparator[specific_sub_annotation]}}.\n\nThe intervention is:\n\
-      \n{{Prompts.Intervention[specific_sub_annotation]}}.\n\nThe outcome:\n\n{{Prompts.Outcome[specific_sub_annotation]}}\n\
-      \nis either {{\"significantly increased\"}}, {{\"significantly decreased\"}}\
-      \ or {{\"no significant difference\"}}. Which is it?\n\n{% endif %}\n\n|||\n\
-      \n{% if sub_annotation_length > 0 %}\n\n{{Prompts.Annotations[specific_sub_annotation].Label[sub_sub_annotation[0]]}}\n\
-      \n{% endif %}"
-    metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: true
-    name: template_3
+    name: Identify comparator
     reference: ''
-  c999469a-20e0-4c10-a707-3c057d5c0245: !Template
-    answer_choices: null
-    id: c999469a-20e0-4c10-a707-3c057d5c0245
+  bf430e30-a6a4-4bc0-a304-bbc1a06e23fd: !Template
+    answer_choices: significantly increased ||| significantly decreased ||| no significant
+      difference
+    id: bf430e30-a6a4-4bc0-a304-bbc1a06e23fd
     jinja: "{% set annotation_length = Prompts.Annotations | length %}\n\n{% set specific_sub_annotation\
       \ = range(0, annotation_length) | choice %}\n\n{% set sub_annotation_length\
       \ = Prompts.Annotations[specific_sub_annotation].Annotations | length %}\n\n\
       {% set sub_sub_annotation = [0] %}\n\n{% if sub_annotation_length > 0 %}\n\n\
-      The following text snippets contain important information:\n\n{{Text[:1200]}}\
-      \ \n\n{{Text[-300:]}}\n\nThe relevant annotations are:\n\n{{ sub_sub_annotation.pop()\
+      The information required to understand the outcome is below:\n\n{{Text[:1200]}}\
+      \ \n\n{{Text[-300:]}}\n\nThe relevant annotations:\n\n{{ sub_sub_annotation.pop()\
       \ | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0, sub_annotation_length)\
       \ | choice) | replace(None, \"\") }}\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
-      \nNow if the comparator is:\n\n{{Prompts.Comparator[specific_sub_annotation]}}.\n\
-      \nThe intervention will be:\n\n{% endif %}\n\n|||\n\n\n{{Prompts.Intervention[specific_sub_annotation]}}.\n"
+      \nConsider the intervention\n\n{{Prompts.Intervention[specific_sub_annotation]}}\n\
+      \nwith respect to the comparator\n\n{{Prompts.Comparator[specific_sub_annotation]}}.\n\
+      \nThe outcome\n\n{{Prompts.Outcome[specific_sub_annotation]}}\n\nis either {{\"\
+      significantly increased\"}}, {{\"significantly decreased\"}} or {{\"no significant\
+      \ difference\"}}. Which is it?\n\n{% endif %}\n\n|||\n\n{% if sub_annotation_length\
+      \ > 0 %}\n\n{{Prompts.Annotations[specific_sub_annotation].Label[sub_sub_annotation[0]]}}\n\
+      \n{% endif %}"
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
-      original_task: false
-    name: template_1
-    reference: ''
-  da67a99f-0472-4658-a410-afe260749d90: !Template
-    answer_choices: null
-    id: da67a99f-0472-4658-a410-afe260749d90
+      choices_in_prompt: true
+      metrics:
+      - Accuracy
+      original_task: true
+    name: Classify outcome with all info
+    reference: Template with the task definition
+  d5fea159-0593-4e99-bb3d-27e5ff1411f9: !Template
+    answer_choices: significantly increased ||| significantly decreased ||| no significant
+      difference
+    id: d5fea159-0593-4e99-bb3d-27e5ff1411f9
     jinja: "{% set annotation_length = Prompts.Annotations | length %}\n\n{% set specific_sub_annotation\
       \ = range(0, annotation_length) | choice %}\n\n{% set sub_annotation_length\
       \ = Prompts.Annotations[specific_sub_annotation].Annotations | length %}\n\n\
       {% set sub_sub_annotation = [0] %}\n\n{% if sub_annotation_length > 0 %}\n\n\
-      The information required to understand the outcome is below:\n\n{{Text[:1200]}}\
-      \ \n\n{{Text[-300:]}}\n\nThe relevant annotations:\n\n{{ sub_sub_annotation.pop()\
-      \ | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0, sub_annotation_length)\
-      \ | choice) | replace(None, \"\") }}\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
-      \nThe comparator is:\n\n{{Prompts.Comparator[specific_sub_annotation]}}.\n\n\
-      The intervention is:\n\n{{Prompts.Intervention[specific_sub_annotation]}}.\n\
-      \nThe outcome:\n\n{{Prompts.Outcome[specific_sub_annotation]}}\n\nis either\
-      \ {{\"significantly increased\"}}, {{\"significantly decreased\"}} or {{\"no\
-      \ significant difference\"}}. Which is it?\n\n{% endif %}\n\n|||\n\n{% if sub_annotation_length\
-      \ > 0 %}\n\n{{Prompts.Annotations[specific_sub_annotation].Label[sub_sub_annotation[0]]}}\n\
+      Read the following text:\n\n{{ sub_sub_annotation.pop() | replace(0, \"\") }}\n\
+      {{ sub_sub_annotation.append(range(0, sub_annotation_length) | choice) | replace(None,\
+      \ \"\") }}\n\n{{Text[:1200]}} \n\n{{Text[-300:]}}\n\nConsider the intervention\n\
+      \n{{Prompts.Intervention[specific_sub_annotation]}}\n\nwith respect to the comparator\n\
+      \n{{Prompts.Comparator[specific_sub_annotation]}}.\n\nThe outcome\n\n{{Prompts.Outcome[specific_sub_annotation]}}\n\
+      \nis either {{\"significantly increased\"}}, {{\"significantly decreased\"}}\
+      \ or {{\"no significant difference\"}}. Which is it?\n\n{% endif %}\n\n|||\n\
+      \n{% if sub_annotation_length > 0 %}\n\n{{Prompts.Annotations[specific_sub_annotation].Label[sub_sub_annotation[0]]}}\n\
       \n{% endif %}"
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: true
+      metrics:
+      - Accuracy
       original_task: true
-    name: template_with_all_info
-    reference: Template with the task definition
-  fbf5600f-5e70-4c15-9608-f53cec32825f: !Template
+    name: Classify outcome
+    reference: ''
+  fed6ea12-8b97-491b-8741-b05d662454de: !Template
     answer_choices: null
-    id: fbf5600f-5e70-4c15-9608-f53cec32825f
+    id: fed6ea12-8b97-491b-8741-b05d662454de
     jinja: "{% set annotation_length = Prompts.Annotations | length %}\n\n{% set specific_sub_annotation\
       \ = range(0, annotation_length) | choice %}\n\n{% set sub_annotation_length\
       \ = Prompts.Annotations[specific_sub_annotation].Annotations | length %}\n\n\
       {% set sub_sub_annotation = [0] %}\n\n{% if sub_annotation_length > 0 %}\n\n\
-      The first text snippet that is important to understand is:\n\n{{Text[:1200]}}\
-      \ \n\nthe second text snippet is:\n\n{{Text[-300:]}}\n\nThe relevant annotations:\n\
-      \n{{ sub_sub_annotation.pop() | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0,\
-      \ sub_annotation_length) | choice) | replace(None, \"\") }}\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
-      \nThe intervention is:\n\n{{Prompts.Intervention[specific_sub_annotation]}}.\n\
-      \nThe outcome:\n\n{{Prompts.Outcome[specific_sub_annotation]}}\n\nThe comparator\
-      \ is:\n\n{% endif %}\n\n|||\n\n{{Prompts.Comparator[specific_sub_annotation]}}."
+      {{ sub_sub_annotation.pop() | replace(0, \"\") }}\n{{ sub_sub_annotation.append(range(0,\
+      \ sub_annotation_length) | choice) | replace(None, \"\") }}\n\nAfter reading\
+      \ the following text:\n\n{{Text[:1200]}} \n\n{{Text[-300:]}}\n\nThe relevant\
+      \ annotations:\n\n{{Prompts.Annotations[specific_sub_annotation].Annotations[sub_sub_annotation[0]]}}\n\
+      \nRegarding the comparator\n\n{{Prompts.Comparator[specific_sub_annotation]}}\n\
+      \nand the intervention\n\n{{Prompts.Intervention[specific_sub_annotation]}},\n\
+      \nthe outcome was\n\n{% endif %}\n\n|||\n\n{{Prompts.Outcome[specific_sub_annotation]}}"
     metadata: !TemplateMetadata
-      choices_in_prompt: null
-      metrics: []
+      choices_in_prompt: false
+      metrics:
+      - Accuracy
       original_task: false
-    name: template_5
+    name: Identify outcome
     reference: ''
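A note on the idiom these templates share: promptsource renders prompt and target from one template, split on the `|||` separator, so a randomly chosen sub-annotation index must be picked once and reused in both halves. Re-running the `choice` filter would draw a new index each time, so the templates stash the draw in a one-element list (`sub_sub_annotation`) created by `{% set %}`, and hide the noisy return values of `pop()`/`append()` with `replace(0, "")` and `replace(None, "")`. Below is a minimal sketch of that trick outside promptsource; the `choice` filter is approximated here with `random.choice`, and the variable and field names are illustrative, not the pipeline's own:

```python
import random
from jinja2 import Environment

env = Environment()
# Stand-in for promptsource's `choice` filter (an assumption, not its real code).
env.filters["choice"] = lambda seq: random.choice(list(seq))

template = env.from_string(
    # A one-element list persists the random draw across the whole render.
    "{% set idx = [0] %}"
    # pop() returns the placeholder 0; `replace` stringifies it away ("0" -> "").
    "{{ idx.pop() | replace(0, '') }}"
    # append() returns None; `replace` turns "None" into "" so nothing leaks out.
    "{{ idx.append(range(0, annotations | length) | choice) | replace(None, '') }}"
    # The same idx[0] is now reused on both sides of the ||| separator.
    "Annotation: {{ annotations[idx[0]] }} ||| {{ annotations[idx[0]] }}"
)

rendered = template.render(annotations=["a", "b", "c"])
prompt, target = rendered.split("|||")
print(prompt.strip(), "->", target.strip())
```

Because the index lives in a mutable list rather than a plain `{% set %}` variable, prompt and target are guaranteed to refer to the same randomly selected annotation.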
