Behathenimro committed on
Commit 49988f8 · verified · 1 Parent(s): 5a1fd3f

Upload 15 files

LICENSE ADDED
@@ -0,0 +1,395 @@
Attribution 4.0 International

=======================================================================

Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.

     Considerations for licensors: Our public licenses are
     intended for use by those authorized to give the public
     permission to use material in ways otherwise restricted by
     copyright and certain other rights. Our licenses are
     irrevocable. Licensors should read and understand the terms
     and conditions of the license they choose before applying it.
     Licensors should also secure all rights necessary before
     applying our licenses so that the public can reuse the
     material as expected. Licensors should clearly mark any
     material not subject to the license. This includes other CC-
     licensed material, or material used under an exception or
     limitation to copyright. More considerations for licensors:
     wiki.creativecommons.org/Considerations_for_licensors

     Considerations for the public: By using one of our public
     licenses, a licensor grants the public permission to use the
     licensed material under specified terms and conditions. If
     the licensor's permission is not necessary for any reason--for
     example, because of any applicable exception or limitation to
     copyright--then that use is not regulated by the license. Our
     licenses grant only permissions under copyright and certain
     other rights that a licensor has authority to grant. Use of
     the licensed material may still be restricted for other
     reasons, including because others have copyright or other
     rights in the material. A licensor may make special requests,
     such as asking that all changes be marked or described.
     Although not required by our licenses, you are encouraged to
     respect those requests where reasonable. More considerations
     for the public:
     wiki.creativecommons.org/Considerations_for_licensees

=======================================================================

Creative Commons Attribution 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.


Section 1 -- Definitions.

  a. Adapted Material means material subject to Copyright and Similar
     Rights that is derived from or based upon the Licensed Material
     and in which the Licensed Material is translated, altered,
     arranged, transformed, or otherwise modified in a manner requiring
     permission under the Copyright and Similar Rights held by the
     Licensor. For purposes of this Public License, where the Licensed
     Material is a musical work, performance, or sound recording,
     Adapted Material is always produced where the Licensed Material is
     synched in timed relation with a moving image.

  b. Adapter's License means the license You apply to Your Copyright
     and Similar Rights in Your contributions to Adapted Material in
     accordance with the terms and conditions of this Public License.

  c. Copyright and Similar Rights means copyright and/or similar rights
     closely related to copyright including, without limitation,
     performance, broadcast, sound recording, and Sui Generis Database
     Rights, without regard to how the rights are labeled or
     categorized. For purposes of this Public License, the rights
     specified in Section 2(b)(1)-(2) are not Copyright and Similar
     Rights.

  d. Effective Technological Measures means those measures that, in the
     absence of proper authority, may not be circumvented under laws
     fulfilling obligations under Article 11 of the WIPO Copyright
     Treaty adopted on December 20, 1996, and/or similar international
     agreements.

  e. Exceptions and Limitations means fair use, fair dealing, and/or
     any other exception or limitation to Copyright and Similar Rights
     that applies to Your use of the Licensed Material.

  f. Licensed Material means the artistic or literary work, database,
     or other material to which the Licensor applied this Public
     License.

  g. Licensed Rights means the rights granted to You subject to the
     terms and conditions of this Public License, which are limited to
     all Copyright and Similar Rights that apply to Your use of the
     Licensed Material and that the Licensor has authority to license.

  h. Licensor means the individual(s) or entity(ies) granting rights
     under this Public License.

  i. Share means to provide material to the public by any means or
     process that requires permission under the Licensed Rights, such
     as reproduction, public display, public performance, distribution,
     dissemination, communication, or importation, and to make material
     available to the public including in ways that members of the
     public may access the material from a place and at a time
     individually chosen by them.

  j. Sui Generis Database Rights means rights other than copyright
     resulting from Directive 96/9/EC of the European Parliament and of
     the Council of 11 March 1996 on the legal protection of databases,
     as amended and/or succeeded, as well as other essentially
     equivalent rights anywhere in the world.

  k. You means the individual or entity exercising the Licensed Rights
     under this Public License. Your has a corresponding meaning.


Section 2 -- Scope.

  a. License grant.

       1. Subject to the terms and conditions of this Public License,
          the Licensor hereby grants You a worldwide, royalty-free,
          non-sublicensable, non-exclusive, irrevocable license to
          exercise the Licensed Rights in the Licensed Material to:

            a. reproduce and Share the Licensed Material, in whole or
               in part; and

            b. produce, reproduce, and Share Adapted Material.

       2. Exceptions and Limitations. For the avoidance of doubt, where
          Exceptions and Limitations apply to Your use, this Public
          License does not apply, and You do not need to comply with
          its terms and conditions.

       3. Term. The term of this Public License is specified in Section
          6(a).

       4. Media and formats; technical modifications allowed. The
          Licensor authorizes You to exercise the Licensed Rights in
          all media and formats whether now known or hereafter created,
          and to make technical modifications necessary to do so. The
          Licensor waives and/or agrees not to assert any right or
          authority to forbid You from making technical modifications
          necessary to exercise the Licensed Rights, including
          technical modifications necessary to circumvent Effective
          Technological Measures. For purposes of this Public License,
          simply making modifications authorized by this Section 2(a)
          (4) never produces Adapted Material.

       5. Downstream recipients.

            a. Offer from the Licensor -- Licensed Material. Every
               recipient of the Licensed Material automatically
               receives an offer from the Licensor to exercise the
               Licensed Rights under the terms and conditions of this
               Public License.

            b. No downstream restrictions. You may not offer or impose
               any additional or different terms or conditions on, or
               apply any Effective Technological Measures to, the
               Licensed Material if doing so restricts exercise of the
               Licensed Rights by any recipient of the Licensed
               Material.

       6. No endorsement. Nothing in this Public License constitutes or
          may be construed as permission to assert or imply that You
          are, or that Your use of the Licensed Material is, connected
          with, or sponsored, endorsed, or granted official status by,
          the Licensor or others designated to receive attribution as
          provided in Section 3(a)(1)(A)(i).

  b. Other rights.

       1. Moral rights, such as the right of integrity, are not
          licensed under this Public License, nor are publicity,
          privacy, and/or other similar personality rights; however, to
          the extent possible, the Licensor waives and/or agrees not to
          assert any such rights held by the Licensor to the limited
          extent necessary to allow You to exercise the Licensed
          Rights, but not otherwise.

       2. Patent and trademark rights are not licensed under this
          Public License.

       3. To the extent possible, the Licensor waives any right to
          collect royalties from You for the exercise of the Licensed
          Rights, whether directly or through a collecting society
          under any voluntary or waivable statutory or compulsory
          licensing scheme. In all other cases the Licensor expressly
          reserves any right to collect such royalties.


Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the
following conditions.

  a. Attribution.

       1. If You Share the Licensed Material (including in modified
          form), You must:

            a. retain the following if it is supplied by the Licensor
               with the Licensed Material:

                 i. identification of the creator(s) of the Licensed
                    Material and any others designated to receive
                    attribution, in any reasonable manner requested by
                    the Licensor (including by pseudonym if
                    designated);

                ii. a copyright notice;

               iii. a notice that refers to this Public License;

                iv. a notice that refers to the disclaimer of
                    warranties;

                 v. a URI or hyperlink to the Licensed Material to the
                    extent reasonably practicable;

            b. indicate if You modified the Licensed Material and
               retain an indication of any previous modifications; and

            c. indicate the Licensed Material is licensed under this
               Public License, and include the text of, or the URI or
               hyperlink to, this Public License.

       2. You may satisfy the conditions in Section 3(a)(1) in any
          reasonable manner based on the medium, means, and context in
          which You Share the Licensed Material. For example, it may be
          reasonable to satisfy the conditions by providing a URI or
          hyperlink to a resource that includes the required
          information.

       3. If requested by the Licensor, You must remove any of the
          information required by Section 3(a)(1)(A) to the extent
          reasonably practicable.

       4. If You Share Adapted Material You produce, the Adapter's
          License You apply must not prevent recipients of the Adapted
          Material from complying with this Public License.


Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:

  a. for the avoidance of doubt, Section 2(a)(1) grants You the right
     to extract, reuse, reproduce, and Share all or a substantial
     portion of the contents of the database;

  b. if You include all or a substantial portion of the database
     contents in a database in which You have Sui Generis Database
     Rights, then the database in which You have Sui Generis Database
     Rights (but not its individual contents) is Adapted Material; and

  c. You must comply with the conditions in Section 3(a) if You Share
     all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.


Section 5 -- Disclaimer of Warranties and Limitation of Liability.

  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

  c. The disclaimer of warranties and limitation of liability provided
     above shall be interpreted in a manner that, to the extent
     possible, most closely approximates an absolute disclaimer and
     waiver of all liability.


Section 6 -- Term and Termination.

  a. This Public License applies for the term of the Copyright and
     Similar Rights licensed here. However, if You fail to comply with
     this Public License, then Your rights under this Public License
     terminate automatically.

  b. Where Your right to use the Licensed Material has terminated under
     Section 6(a), it reinstates:

       1. automatically as of the date the violation is cured, provided
          it is cured within 30 days of Your discovery of the
          violation; or

       2. upon express reinstatement by the Licensor.

     For the avoidance of doubt, this Section 6(b) does not affect any
     right the Licensor may have to seek remedies for Your violations
     of this Public License.

  c. For the avoidance of doubt, the Licensor may also offer the
     Licensed Material under separate terms or conditions or stop
     distributing the Licensed Material at any time; however, doing so
     will not terminate this Public License.

  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
     License.


Section 7 -- Other Terms and Conditions.

  a. The Licensor shall not be bound by any additional or different
     terms or conditions communicated by You unless expressly agreed.

  b. Any arrangements, understandings, or agreements regarding the
     Licensed Material not stated herein are separate from and
     independent of the terms and conditions of this Public License.


Section 8 -- Interpretation.

  a. For the avoidance of doubt, this Public License does not, and
     shall not be interpreted to, reduce, limit, restrict, or impose
     conditions on any use of the Licensed Material that could lawfully
     be made without permission under this Public License.

  b. To the extent possible, if any provision of this Public License is
     deemed unenforceable, it shall be automatically reformed to the
     minimum extent necessary to make it enforceable. If the provision
     cannot be reformed, it shall be severed from this Public License
     without affecting the enforceability of the remaining terms and
     conditions.

  c. No term or condition of this Public License will be waived and no
     failure to comply consented to unless expressly agreed to by the
     Licensor.

  d. Nothing in this Public License constitutes or may be interpreted
     as a limitation upon, or waiver of, any privileges and immunities
     that apply to the Licensor or You, including from the legal
     processes of any jurisdiction or authority.


=======================================================================

Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the "Licensor." The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.

Creative Commons may be contacted at creativecommons.org.
data/all_craft_md.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/all_dev_good.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
environment.yml ADDED
@@ -0,0 +1,271 @@
name: mediq
channels:
  - pytorch
  - nvidia
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=2_gnu
  - aiohappyeyeballs=2.4.4=py312h06a4308_0
  - aiohttp=3.10.5=py312h5eee18b_0
  - aiosignal=1.2.0=pyhd3eb1b0_0
  - annotated-types=0.6.0=py312h06a4308_0
  - anyio=4.6.2=py312h06a4308_0
  - attrs=24.2.0=py312h06a4308_0
  - aws-c-auth=0.7.22=h96bc93b_2
  - aws-c-cal=0.6.14=h88a6e22_1
  - aws-c-common=0.9.19=h4ab18f5_0
  - aws-c-compression=0.2.18=h83b837d_6
  - aws-c-event-stream=0.4.2=ha47c788_12
  - aws-c-http=0.8.1=h29d6fba_17
  - aws-c-io=0.14.8=h21d4f22_5
  - aws-c-mqtt=0.10.4=h759edc4_4
  - aws-c-s3=0.5.9=h594631b_3
  - aws-c-sdkutils=0.1.16=h83b837d_2
  - aws-checksums=0.1.18=h83b837d_6
  - aws-crt-cpp=0.26.9=he3a8b3b_0
  - aws-sdk-cpp=1.11.329=hba8bd5f_3
  - blas=1.0=mkl
  - bottleneck=1.4.2=py312ha883a20_0
  - brotli-python=1.0.9=py312h6a678d5_8
  - bzip2=1.0.8=h5eee18b_6
  - c-ares=1.34.4=hb9d3cd8_0
  - ca-certificates=2024.12.31=h06a4308_0
  - certifi=2024.12.14=py312h06a4308_0
  - charset-normalizer=3.3.2=pyhd3eb1b0_0
  - cuda-cudart=12.4.127=0
  - cuda-cupti=12.4.127=0
  - cuda-libraries=12.4.1=0
  - cuda-nvrtc=12.4.127=0
  - cuda-nvtx=12.4.127=0
  - cuda-opencl=12.6.77=0
  - cuda-runtime=12.4.1=0
  - cuda-version=12.6=3
  - datasets=3.2.0=pyhd8ed1ab_0
  - dill=0.3.8=py312h06a4308_0
  - distro=1.9.0=py312h06a4308_0
  - expat=2.6.4=h6a678d5_0
  - ffmpeg=4.3=hf484d3e_0
  - freetype=2.12.1=h4a9f257_0
  - frozenlist=1.5.0=py312h5eee18b_0
  - fsspec=2024.6.1=py312h06a4308_0
  - gflags=2.2.2=h6a678d5_1
  - giflib=5.2.2=h5eee18b_0
  - glog=0.7.1=hbabe93e_0
  - gmp=6.2.1=h295c915_3
  - gnutls=3.6.15=he1e5248_0
  - h11=0.14.0=py312h06a4308_0
  - httpcore=1.0.2=py312h06a4308_0
  - httpx=0.27.0=py312h06a4308_0
  - huggingface_hub=0.24.6=py312h06a4308_0
  - idna=3.7=py312h06a4308_0
  - intel-openmp=2023.1.0=hdb19cb5_46306
  - jinja2=3.1.4=py312h06a4308_1
  - jiter=0.6.1=py312hb02cf49_0
  - jpeg=9e=h5eee18b_3
  - krb5=1.20.1=h143b758_1
  - lame=3.100=h7b6447c_0
  - lcms2=2.16=hb9589c4_0
  - ld_impl_linux-64=2.40=h12ee557_0
  - lerc=4.0.0=h6a678d5_0
  - libabseil=20240116.2=cxx17_h6a678d5_0
  - libarrow=16.1.0=hcb6531f_6_cpu
  - libarrow-acero=16.1.0=hac33072_6_cpu
  - libarrow-dataset=16.1.0=hac33072_6_cpu
  - libarrow-substrait=16.1.0=h7e0c224_6_cpu
  - libbrotlicommon=1.1.0=hb9d3cd8_2
  - libbrotlidec=1.1.0=hb9d3cd8_2
  - libbrotlienc=1.1.0=hb9d3cd8_2
  - libcrc32c=1.1.2=h6a678d5_0
  - libcublas=12.4.5.8=0
  - libcufft=11.2.1.3=0
  - libcufile=1.11.1.6=0
  - libcurand=10.3.7.77=0
  - libcurl=8.9.1=h251f7ec_0
  - libcusolver=11.6.1.9=0
  - libcusparse=12.3.1.170=0
  - libdeflate=1.22=h5eee18b_0
  - libedit=3.1.20230828=h5eee18b_0
  - libev=4.33=h7f8727e_1
  - libevent=2.1.12=hdbd6064_1
  - libexpat=2.6.4=h5888daf_0
  - libffi=3.4.4=h6a678d5_1
  - libgcc=14.2.0=h77fa898_1
  - libgcc-ng=14.2.0=h69a702a_1
  - libgomp=14.2.0=h77fa898_1
  - libgoogle-cloud=2.24.0=h2736e30_0
  - libgoogle-cloud-storage=2.24.0=h3d9a0c8_0
  - libgrpc=1.62.2=h15f2491_0
  - libiconv=1.16=h5eee18b_3
  - libidn2=2.3.4=h5eee18b_0
  - libjpeg-turbo=2.0.0=h9bf148f_0
  - libnghttp2=1.57.0=h2d74bed_0
  - libnpp=12.2.5.30=0
  - libnsl=2.0.1=hd590300_0
  - libnvfatbin=12.6.77=0
  - libnvjitlink=12.4.127=0
  - libnvjpeg=12.3.1.117=0
  - libparquet=16.1.0=h6a7eafb_6_cpu
  - libpng=1.6.39=h5eee18b_0
  - libprotobuf=4.25.3=he621ea3_0
  - libre2-11=2023.09.01=h5a48ba9_2
  - libsqlite=3.46.0=hde9e2c9_0
  - libssh2=1.11.1=h251f7ec_0
  - libstdcxx=14.2.0=hc0a3c3a_1
  - libstdcxx-ng=14.2.0=h4852527_1
  - libtasn1=4.19.0=h5eee18b_0
  - libthrift=0.19.0=hb90f79a_1
  - libtiff=4.5.1=hffd6297_1
  - libunistring=0.9.10=h27cfd23_0
  - libutf8proc=2.8.0=hf23e847_1
  - libuuid=2.38.1=h0b41bf4_0
  - libwebp=1.3.2=h11a3e52_0
  - libwebp-base=1.3.2=h5eee18b_1
  - libxcrypt=4.4.36=hd590300_1
  - libzlib=1.2.13=h4ab18f5_6
  - llvm-openmp=14.0.6=h9e868ea_0
  - lz4-c=1.9.4=h6a678d5_1
  - markupsafe=2.1.3=py312h5eee18b_0
  - mkl=2023.1.0=h213fc3f_46344
  - mkl-service=2.4.0=py312h5eee18b_1
  - mkl_fft=1.3.11=py312h5eee18b_0
  - mkl_random=1.2.8=py312h526ad5a_0
  - mpmath=1.3.0=py312h06a4308_0
  - multidict=6.1.0=py312h5eee18b_0
  - multiprocess=0.70.15=py312h06a4308_0
  - ncurses=6.4=h6a678d5_0
  - nettle=3.7.3=hbbd107a_1
  - networkx=3.2.1=py312h06a4308_0
  - numexpr=2.10.1=py312h3c60e43_0
  - openai=1.57.4=pyhd8ed1ab_1
  - openh264=2.1.1=h4ff587b_0
  - openjpeg=2.5.2=he7f1fd0_0
  - openssl=3.4.0=hb9d3cd8_0
  - orc=2.0.1=h2d29ad5_0
  - packaging=24.2=py312h06a4308_0
  - pandas=2.2.3=py312h6a678d5_0
  - pip=24.2=py312h06a4308_0
  - propcache=0.2.0=py312h5eee18b_0
  - pyarrow=16.1.0=py312h9cebb41_2
  - pyarrow-core=16.1.0=py312h0983c49_2_cpu
  - pysocks=1.7.1=py312h06a4308_0
  - python=3.12.2=hab00c5b_0_cpython
  - python-dateutil=2.9.0post0=py312h06a4308_2
  - python-tzdata=2023.3=pyhd3eb1b0_0
  - python-xxhash=2.0.2=py312h5eee18b_1
  - python_abi=3.12=5_cp312
  - pytorch=2.5.1=py3.12_cuda12.4_cudnn9.1.0_0
  - pytorch-cuda=12.4=hc786d27_7
  - pytorch-mutex=1.0=cuda
  - pytz=2024.1=py312h06a4308_0
  - pyyaml=6.0.2=py312h5eee18b_0
  - re2=2023.09.01=h7f4b329_2
  - readline=8.2=h5eee18b_0
  - regex=2024.9.11=py312h5eee18b_0
  - requests=2.32.3=py312h06a4308_1
  - s2n=1.4.15=he19d79f_0
  - safetensors=0.4.5=py312hc50d6dc_1
  - setuptools=75.1.0=py312h06a4308_0
  - six=1.16.0=pyhd3eb1b0_1
  - snappy=1.2.1=h6a678d5_0
  - sniffio=1.3.0=py312h06a4308_0
  - sqlite=3.45.3=h5eee18b_0
  - tbb=2021.8.0=hdb19cb5_0
  - tk=8.6.14=h39e8969_0
  - tokenizers=0.21.0=py312h8360d73_0
  - torchaudio=2.5.1=py312_cu124
  - torchtriton=3.1.0=py312
  - torchvision=0.20.1=py312_cu124
  - tqdm=4.66.5=py312he106c6f_0
  - transformers=4.47.1=pyhd8ed1ab_0
  - tzdata=2024b=h04d1e81_0
  - urllib3=2.2.3=py312h06a4308_0
  - vllm-nccl-cu12=2.18.1.0.4.0=pyh52da0d0_1
  - wheel=0.44.0=py312h06a4308_0
  - xxhash=0.8.0=h7f8727e_3
  - xz=5.4.6=h5eee18b_1
  - yaml=0.2.5=h7b6447c_0
  - yarl=1.18.0=py312h5eee18b_0
  - zlib=1.2.13=h4ab18f5_6
  - zstd=1.5.6=hc292b87_0
  - pip:
      - aiohttp-cors==0.7.0
      - airportsdata==20241001
      - astor==0.8.1
      - blake3==1.0.4
      - cachetools==5.5.1
      - click==8.1.8
      - cloudpickle==3.1.1
      - colorful==0.5.6
      - compressed-tensors==0.8.1
      - depyf==0.18.0
      - diskcache==5.6.3
      - distlib==0.3.9
      - einops==0.8.0
      - fastapi==0.115.7
      - filelock==3.17.0
      - gguf==0.10.0
      - google-api-core==2.24.0
      - google-auth==2.38.0
      - googleapis-common-protos==1.66.0
      - grpcio==1.70.0
      - httptools==0.6.4
      - importlib-metadata==8.6.1
      - iniconfig==2.0.0
      - interegular==0.3.3
      - jsonschema==4.23.0
      - jsonschema-specifications==2024.10.1
      - lark==1.2.2
      - lm-format-enforcer==0.10.9
      - mistral-common==1.5.2
      - msgpack==1.1.0
      - msgspec==0.19.0
      - nest-asyncio==1.6.0
      - numpy==1.26.4
      - nvidia-ml-py==12.570.86
      - opencensus==0.11.4
      - opencensus-context==0.1.3
      - opencv-python-headless==4.11.0.86
      - outlines==0.1.11
      - outlines-core==0.1.26
      - partial-json-parser==0.2.1.1.post5
      - pillow==10.4.0
      - platformdirs==4.3.6
      - pluggy==1.5.0
      - prometheus-client==0.21.1
      - prometheus-fastapi-instrumentator==7.0.2
      - proto-plus==1.25.0
      - protobuf==5.29.3
      - psutil==6.1.1
      - py-cpuinfo==9.0.0
      - py-spy==0.4.0
      - pyasn1==0.6.1
      - pyasn1-modules==0.4.1
      - pybind11==2.13.6
      - pycountry==24.6.1
      - pydantic==2.10.6
      - pydantic-core==2.27.2
      - pytest==8.3.4
      - python-dotenv==1.0.1
      - pyzmq==26.2.0
      - ray==2.41.0
      - referencing==0.36.2
      - rpds-py==0.22.3
      - rsa==4.9
      - sentencepiece==0.2.0
      - smart-open==7.1.0
      - starlette==0.45.3
      - sympy==1.13.1
      - tiktoken==0.7.0
      - typing-extensions==4.12.2
      - uvicorn==0.34.0
      - uvloop==0.21.0
      - virtualenv==20.29.1
      - vllm==0.6.6.post1
      - watchfiles==1.0.4
      - websockets==14.2
      - wrapt==1.17.2
      - xformers==0.0.28.post3
      - xgrammar==0.1.11
      - zipp==3.21.0
readme.md ADDED
@@ -0,0 +1,79 @@
# MediQ: Question-Asking LLMs for Adaptive and Reliable Clinical Reasoning

## [[paper](https://arxiv.org/abs/2406.00922)] [[website](https://stellalisy.com/projects/mediQ/)] [[data](https://github.com/stellali7/mediQ/tree/main/data)]

## Overview
This benchmark system simulates an interactive conversation between a patient and an expert. It evaluates how well a participant's expert module handles realistic patient queries by either asking relevant follow-up questions or making a final decision based on the conversation history.

## Installation
Clone this repository to your local machine using the following command:
```
git clone https://github.com/stellali7/MediQ.git
```

Navigate into the project directory:
```
cd MediQ
```

Create a new conda environment with the necessary packages (note: you need to be on a GPU node to install PyTorch with CUDA):
```
conda env create -f environment.yml
```
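
The environment is named `mediq` (the `name:` field at the top of `environment.yml`), so once creation finishes, activate it with:
```
conda activate mediq
```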

## Project Structure
- `mediQ_benchmark.py`: Main script to run the benchmark.
- `patient.py`: Defines the `Patient` class that simulates patient behavior.
- `expert.py`: Contains the `Expert` class, which participants extend to implement their response strategies.
- `args.py`: Handles command-line arguments for the benchmark system.

## Configuration
Before running the benchmark, configure the necessary parameters in `args.py`:
- `--expert_module`: The file name (without `.py`) where the Expert class is implemented (e.g. `expert` if your Expert class definition is in `expert.py`).
- `--expert_class`: The name of the Expert class to be evaluated; this should be defined in the file `[expert_module].py` (e.g. `RandomExpert`).
- `--patient_module`: The file name (without `.py`) where the Patient class is implemented (e.g. `patient` if your Patient class definition is in `patient.py`).
- `--patient_class`: The name of the Patient class to use for the benchmark; this should be defined in the file `[patient_module].py` (e.g. `RandomPatient`).
- `--data_dir`: Directory containing the development data files.
- `--dev_filename`: Filename for development data.
- `--log_filename`: Filename for logging general benchmark information.
- `--history_log_filename`: Filename for logging detailed interaction history.
- `--message_log_filename`: Filename for logging messages passed into API calls.
- `--output_filename`: Filename for the output JSONL results.

## Running the Benchmark
NOTE: if you choose to use an OpenAI model to power the benchmark, you need to put the API key in `src/keys.py`.

To test run the benchmark, use the following command (note: the Patient system is provided as described in the paper, while the Expert system is skeleton code; for a fast test run, use `--patient_class RandomPatient` so that no actual model or API is called):
```
python mediQ_benchmark.py --expert_module expert --expert_class FixedExpert \
                          --patient_module patient --patient_class RandomPatient \
                          --data_dir ../data --dev_filename all_dev_good.jsonl \
                          --output_filename out.jsonl --max_questions 10
```

Make sure to replace the placeholder values with the actual parameters for your setup.

## Try out your own Expert system
You can easily create your own `Expert` class within a module specified by `--expert_module`, or load a different model by specifying the model path in `--expert_model`. The class should implement the `respond` method to interact with `Patient` instances based on their states (the Patient can be customized as well). The response should be either a follow-up question or a final decision. Your implementation will be tested against a variety of patient scenarios provided in the development dataset.
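
For illustration, here is a minimal sketch of a custom Expert (the file name `my_expert.py` is hypothetical; the `patient_state` keys and the shape of the returned dictionary mirror `RandomExpert` in `src/expert.py`):

```python
# my_expert.py -- hypothetical example; run with --expert_module my_expert --expert_class MyExpert
from expert import Expert

class MyExpert(Expert):
    def respond(self, patient_state):
        # Ask one follow-up question first, then commit to an answer.
        if not patient_state["interaction_history"]:
            return {"type": "question",
                    "question": "When did your symptoms start?",
                    "letter_choice": None,
                    "confidence": 0.0}
        return {"type": "choice",
                "letter_choice": sorted(self.options.keys())[0],  # trivially picks the first option
                "confidence": 1.0}
```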

## How to Cite
```
@inproceedings{li2024mediq,
  title={MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning},
  author={Li, Shuyue Stella and Balachandran, Vidhisha and Feng, Shangbin and Ilgen, Jonathan S and Pierson, Emma and Koh, Pang Wei and Tsvetkov, Yulia},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}
```

Shield: [![CC BY 4.0][cc-by-shield]][cc-by]

This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].

[![CC BY 4.0][cc-by-image]][cc-by]

[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
src/args.py ADDED
@@ -0,0 +1,46 @@
import argparse
import os


def _ensure_parent_dir(path):
    # os.path.dirname() returns '' for bare filenames such as the default
    # 'log.log'; os.makedirs('') raises FileNotFoundError, so only create
    # the directory when there actually is one.
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)


def get_args():
    parser = argparse.ArgumentParser(description="Run the benchmark with specified configurations.")
    parser.add_argument('--expert_module', type=str, default='expert', help='File name (without .py) where the expert class is implemented.')
    parser.add_argument('--expert_class', type=str, required=True, help='Expert class name to use for the benchmark.')
    parser.add_argument('--expert_model', type=str, default='meta-llama/Llama-3.1-8B-Instruct', help='Expert model name to use for the benchmark; can be a local model or a Huggingface model.')
    parser.add_argument('--expert_model_question_generator', type=str, default='meta-llama/Llama-3.1-8B-Instruct', help='Optional separate model for the follow-up question generator; can be a local model or a Huggingface model.')

    parser.add_argument('--patient_module', type=str, default='patient', help='File name (without .py) where the patient class is implemented.')
    parser.add_argument('--patient_class', type=str, required=True, help='Patient class name to use for the benchmark.')
    parser.add_argument('--patient_model', type=str, default='meta-llama/Llama-3.1-8B-Instruct', help='Patient model name to use for the benchmark; can be a local model or a Huggingface model.')

    parser.add_argument('--data_dir', type=str, required=True, help='Directory containing the development data files.')
    parser.add_argument('--dev_filename', type=str, required=True, help='Filename for development data.')

    parser.add_argument('--output_filename', type=str, default="results.jsonl")

    parser.add_argument("--max_questions", type=int, default=30)

    parser.add_argument('--log_filename', type=str, default='log.log', help='Filename for logging general benchmark results.')
    parser.add_argument('--history_log_filename', type=str, default=None, help='Filename for logging interaction history; will not log if None.')
    parser.add_argument('--detail_log_filename', type=str, default=None, help='Filename for logging detailed prompts and responses on abstention; will not log if None.')
    parser.add_argument('--message_log_filename', type=str, default=None, help='Filename for logging messages passed into API calls; will not log if None.')

    parser.add_argument('--rationale_generation', action='store_true', help='Generate rationales for the choices.')
    parser.add_argument('--self_consistency', type=int, default=1, help='Number of times to run the self-consistency check.')
    parser.add_argument('--abstain_threshold', type=float, default=0.8, help='Threshold for abstaining from making a choice.')
    parser.add_argument('--independent_modules', action='store_true', help="Cognitive modules within the Expert don't see the previous conversation.")

    parser.add_argument('--use_vllm', action='store_true', help='Use the vLLM model for generating responses.')
    parser.add_argument('--use_api', type=str, default=None, help='Use an API for generating responses.', choices=['openai'])  # compatible with the OpenAI API for now
    parser.add_argument('--temperature', type=float, default=0.6, help='Temperature for sampling from the model.')
    parser.add_argument('--top_p', type=float, default=0.9, help='Top p value for nucleus sampling.')
    parser.add_argument('--max_tokens', type=int, default=256, help='Maximum number of tokens to generate.')
    parser.add_argument('--top_logprobs', type=int, default=0, help='Number of top logprobs to return.')
    parser.add_argument('--api_account', type=str, default="mediQ", help='API keys are stored in keys.py; api_account is the name of the key.')

    args = parser.parse_args()

    for path in (args.log_filename, args.history_log_filename, args.detail_log_filename, args.message_log_filename):
        if path:
            _ensure_parent_dir(path)
    return args
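
As the Configuration section of the README suggests, `--expert_module`/`--expert_class` (and their patient counterparts) name the module and class to load at runtime. The benchmark script itself is not shown in this commit view, so the following loader is only a sketch of how that lookup could plausibly work:

```python
# Hypothetical loader sketch; mediQ_benchmark.py is not part of this excerpt.
import importlib
from args import get_args

args = get_args()
expert_module = importlib.import_module(args.expert_module)    # e.g. "expert" -> src/expert.py
ExpertClass = getattr(expert_module, args.expert_class)        # e.g. "RandomExpert"
patient_module = importlib.import_module(args.patient_module)
PatientClass = getattr(patient_module, args.patient_class)
```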
src/evaluate.py ADDED
@@ -0,0 +1,56 @@
import json
import numpy as np
import torch
from sentence_transformers import SentenceTransformer, util

# from mydifflib import get_close_matches

device = torch.device("cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() and torch.backends.mps.is_built() else "cpu")
print(f"Device: {device}")

emb_model = SentenceTransformer('stsb-roberta-large', device=device)

def eval_sample(id, sample, choice, scores, questions, answers, answer_dne, temp_choice_list, threshold=0.85):
    questions_emb = emb_model.encode(questions)
    facts_emb = emb_model.encode(sample["facts"])
    facts_count = [0]*len(sample["facts"])
    answers_expanded, answers_count = [], []

    for answer in answers:
        answer = [a for a in answer.split('. ') if not a.isnumeric()]  # split the answer into atomic facts
        answers_expanded.extend(answer)
        answers_count.append(len(answer))
    answers_emb = emb_model.encode(answers_expanded)

    output_dict = {
        "id": id,
        "info": sample,
        "interactive_system": {
            "choice": choice,
            "confidence_scores": scores,
            "questions": questions,
            "answers": answers,
            "answer_dne": answer_dne,
            "num_questions": len(questions),
            "intermediate_choices": temp_choice_list,
        },
        "eval": {
            "repeat_question_score": [],
            "repeat_answer_score": [],
            "relevancy_score": [],
            "delta_confidence_score": [],
            "specificity_score": []
        }
    }

    # Example placeholder for evaluation metrics computation
    for i in range(len(questions)):
        output_dict["eval"]["repeat_question_score"].append(np.random.random())  # Placeholder
        output_dict["eval"]["repeat_answer_score"].append(np.random.random())  # Placeholder
        output_dict["eval"]["relevancy_score"].append(np.random.random())  # Placeholder
        output_dict["eval"]["delta_confidence_score"].append(np.random.random())  # Placeholder
        output_dict["eval"]["specificity_score"].append(np.random.random())  # Placeholder

    return output_dict

# Other functions should be similarly reviewed and implemented
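
The question and fact embeddings computed above are never consumed by the random placeholders. As one sketch of what a real metric could look like (an assumption for illustration, not the paper's exact metric definition), the otherwise-unused `util` import supports a cosine-similarity relevancy score:

```python
def relevancy_sketch(questions_emb, facts_emb, threshold=0.85):
    # Cosine similarity between every question and every atomic patient fact.
    sims = util.cos_sim(questions_emb, facts_emb)  # shape: (num_questions, num_facts)
    max_sims = [float(row.max()) for row in sims]  # best-matching fact per question
    # A question might count as "relevant" when its best match clears the threshold.
    return max_sims, [s >= threshold for s in max_sims]
```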
src/expert.py ADDED
@@ -0,0 +1,225 @@
import random
import expert_functions

class Expert:
    """
    Expert system skeleton
    """
    def __init__(self, args, inquiry, options):
        # Initialize the expert with necessary parameters and the initial context or inquiry
        self.args = args
        self.inquiry = inquiry
        self.options = options

    def respond(self, patient_state):
        # Decision-making based on the initial information, history of interactions, current inquiry, and options
        raise NotImplementedError

    def ask_question(self, patient_state, prev_messages):
        # Generate a question based on the current patient state
        kwargs = {
            "patient_state": patient_state,
            "inquiry": self.inquiry,
            "options_dict": self.options,
            "messages": prev_messages,
            "independent_modules": self.args.independent_modules,
            "model_name": self.args.expert_model_question_generator,
            "use_vllm": self.args.use_vllm,
            "use_api": self.args.use_api,
            "temperature": self.args.temperature,
            "max_tokens": self.args.max_tokens,
            "top_p": self.args.top_p,
            "top_logprobs": self.args.top_logprobs,
            "api_account": self.args.api_account
        }
        return expert_functions.question_generation(**kwargs)

    def get_abstain_kwargs(self, patient_state):
        kwargs = {
            "max_depth": self.args.max_questions,
            "patient_state": patient_state,
            "rationale_generation": self.args.rationale_generation,
            "inquiry": self.inquiry,
            "options_dict": self.options,
            "abstain_threshold": self.args.abstain_threshold,
            "self_consistency": self.args.self_consistency,
            "model_name": self.args.expert_model,
            "use_vllm": self.args.use_vllm,
            "use_api": self.args.use_api,
            "temperature": self.args.temperature,
            "max_tokens": self.args.max_tokens,
            "top_p": self.args.top_p,
            "top_logprobs": self.args.top_logprobs,
            "api_account": self.args.api_account
        }
        return kwargs


class RandomExpert(Expert):
    """
    An example Expert system that randomly asks a question or makes a choice based on the current patient state.
    This should be replaced with a more sophisticated expert system that can make informed decisions based on the patient state.
    """

    def respond(self, patient_state):
        # Decision-making based on the initial information, history of interactions, current inquiry, and options
        initial_info = patient_state['initial_info']  # not used because the decision is random
        history = patient_state['interaction_history']  # not used because the decision is random

        # randomly decide to ask a question or make a choice
        abstain = random.random() < 0.5
        toy_question = "Can you describe your symptoms more?"
        toy_decision = self.choice(patient_state)
        conf_score = random.random()/2 if abstain else random.random()

        return {
            "type": "question" if abstain else "choice",
            "question": toy_question,
            "letter_choice": toy_decision,
            "confidence": conf_score,  # Optional confidence score
            "urgent": True,  # Example of another optional flag
            "additional_info": "Check for any recent changes."  # Any other optional data
        }

    def choice(self, patient_state):
        # Generate a choice or intermediate decision based on the current patient state
        # randomly choose an option
        return random.choice(list(self.options.keys()))


class BasicExpert(Expert):
    def respond(self, patient_state):
        kwargs = self.get_abstain_kwargs(patient_state)
        abstain_response_dict = expert_functions.implicit_abstention_decision(**kwargs)
        return {
            "type": "question" if abstain_response_dict["abstain"] else "choice",
            "question": abstain_response_dict["atomic_question"],
            "letter_choice": abstain_response_dict["letter_choice"],
            "confidence": abstain_response_dict["confidence"],
            "usage": abstain_response_dict["usage"]
        }


class FixedExpert(Expert):
    def respond(self, patient_state):
        # Decision-making based on the initial information, history of interactions, current inquiry, and options
        kwargs = self.get_abstain_kwargs(patient_state)
        abstain_response_dict = expert_functions.fixed_abstention_decision(**kwargs)
        if not abstain_response_dict["abstain"]:
            return {
                "type": "choice",
                "letter_choice": abstain_response_dict["letter_choice"],
                "confidence": abstain_response_dict["confidence"],
                "usage": abstain_response_dict["usage"]
            }

        question_response_dict = self.ask_question(patient_state, abstain_response_dict["messages"])
        abstain_response_dict["usage"]["input_tokens"] += question_response_dict["usage"]["input_tokens"]
        abstain_response_dict["usage"]["output_tokens"] += question_response_dict["usage"]["output_tokens"]
        return {
            "type": "question",
            "question": question_response_dict["atomic_question"],
            "letter_choice": abstain_response_dict["letter_choice"],
            "confidence": abstain_response_dict["confidence"],
            "usage": abstain_response_dict["usage"]
        }


class BinaryExpert(Expert):
    def respond(self, patient_state):
        # Decision-making based on the initial information, history of interactions, current inquiry, and options
        kwargs = self.get_abstain_kwargs(patient_state)
        abstain_response_dict = expert_functions.binary_abstention_decision(**kwargs)
        if not abstain_response_dict["abstain"]:
            return {
                "type": "choice",
                "letter_choice": abstain_response_dict["letter_choice"],
                "confidence": abstain_response_dict["confidence"],
                "usage": abstain_response_dict["usage"]
            }

        question_response_dict = self.ask_question(patient_state, abstain_response_dict["messages"])
        abstain_response_dict["usage"]["input_tokens"] += question_response_dict["usage"]["input_tokens"]
        abstain_response_dict["usage"]["output_tokens"] += question_response_dict["usage"]["output_tokens"]
        return {
            "type": "question",
            "question": question_response_dict["atomic_question"],
            "letter_choice": abstain_response_dict["letter_choice"],
            "confidence": abstain_response_dict["confidence"],
            "usage": abstain_response_dict["usage"]
        }


class NumericalExpert(Expert):
    def respond(self, patient_state):
        # Decision-making based on the initial information, history of interactions, current inquiry, and options
        kwargs = self.get_abstain_kwargs(patient_state)
        abstain_response_dict = expert_functions.numerical_abstention_decision(**kwargs)
        if not abstain_response_dict["abstain"]:
            return {
                "type": "choice",
                "letter_choice": abstain_response_dict["letter_choice"],
                "confidence": abstain_response_dict["confidence"],
                "usage": abstain_response_dict["usage"]
            }

        question_response_dict = self.ask_question(patient_state, abstain_response_dict["messages"])
        abstain_response_dict["usage"]["input_tokens"] += question_response_dict["usage"]["input_tokens"]
        abstain_response_dict["usage"]["output_tokens"] += question_response_dict["usage"]["output_tokens"]
        return {
            "type": "question",
            "question": question_response_dict["atomic_question"],
            "letter_choice": abstain_response_dict["letter_choice"],
            "confidence": abstain_response_dict["confidence"],
            "usage": abstain_response_dict["usage"]
        }


class NumericalCutOffExpert(Expert):
    def respond(self, patient_state):
        # Decision-making based on the initial information, history of interactions, current inquiry, and options
        kwargs = self.get_abstain_kwargs(patient_state)
        abstain_response_dict = expert_functions.numcutoff_abstention_decision(**kwargs)
        if not abstain_response_dict["abstain"]:
            return {
                "type": "choice",
                "letter_choice": abstain_response_dict["letter_choice"],
                "confidence": abstain_response_dict["confidence"],
                "usage": abstain_response_dict["usage"]
            }

        question_response_dict = self.ask_question(patient_state, abstain_response_dict["messages"])
        abstain_response_dict["usage"]["input_tokens"] += question_response_dict["usage"]["input_tokens"]
        abstain_response_dict["usage"]["output_tokens"] += question_response_dict["usage"]["output_tokens"]
        return {
            "type": "question",
            "question": question_response_dict["atomic_question"],
            "letter_choice": abstain_response_dict["letter_choice"],
            "confidence": abstain_response_dict["confidence"],
            "usage": abstain_response_dict["usage"]
        }


class ScaleExpert(Expert):
    def respond(self, patient_state):
        # Decision-making based on the initial information, history of interactions, current inquiry, and options
        kwargs = self.get_abstain_kwargs(patient_state)
        abstain_response_dict = expert_functions.scale_abstention_decision(**kwargs)
        if not abstain_response_dict["abstain"]:
            return {
                "type": "choice",
                "letter_choice": abstain_response_dict["letter_choice"],
                "confidence": abstain_response_dict["confidence"],
                "usage": abstain_response_dict["usage"]
            }

        question_response_dict = self.ask_question(patient_state, abstain_response_dict["messages"])
        abstain_response_dict["usage"]["input_tokens"] += question_response_dict["usage"]["input_tokens"]
        abstain_response_dict["usage"]["output_tokens"] += question_response_dict["usage"]["output_tokens"]
        return {
            "type": "question",
            "question": question_response_dict["atomic_question"],
            "letter_choice": abstain_response_dict["letter_choice"],
            "confidence": abstain_response_dict["confidence"],
            "usage": abstain_response_dict["usage"]
        }
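
To sanity-check an Expert without the full benchmark loop, it can be driven by hand. A minimal sketch (the `SimpleNamespace` fields and the toy inquiry/options are hypothetical stand-ins; `RandomExpert` never touches the model-related arguments):

```python
from types import SimpleNamespace
from expert import RandomExpert  # assumes expert_functions is importable alongside

args = SimpleNamespace(independent_modules=False, max_questions=10,
                       rationale_generation=False, self_consistency=1,
                       abstain_threshold=0.8)
expert = RandomExpert(args,
                      inquiry="Which of the following is the most likely diagnosis?",
                      options={"A": "Asthma", "B": "Pneumonia", "C": "GERD", "D": "Anxiety"})
patient_state = {"initial_info": "55-year-old with chest tightness after meals.",
                 "interaction_history": []}
print(expert.respond(patient_state))  # a follow-up question or a letter choice, at random
```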
src/expert_basics.py ADDED
@@ -0,0 +1,305 @@
import logging
import random
import re
from helper import get_response


def log_info(message, logger_name="detail_logger", print_to_std=False, type="info"):
    # (note: the 'type' parameter shadows the builtin within this function)
    logger = logging.getLogger(logger_name)
    if type == "error": return logger.error(message)
    if logger: logger.info(message)
    if print_to_std: print(message + "\n")


def expert_response_choice_or_question(messages, options_dict, self_consistency=1, **kwargs):
    """
    Implicit Abstain
    """
    log_info(f"++++++++++++++++++++ Start of Implicit Abstention [expert_basics.py:expert_response_choice_or_question()] ++++++++++++++++++++")
    log_info(f"[<IMPLICIT ABSTAIN PROMPT>] [len(messages)={len(messages)}] (messages[-1]):\n{messages[-1]['content']}")
    answers, questions, response_texts = [], [], {}
    total_tokens = {"input_tokens": 0, "output_tokens": 0}
    choice_logprobs = []
    for i in range(self_consistency):
        log_info(f"-------------------- Self-Consistency Iteration {i+1} --------------------")
        response_text, log_probs, num_tokens = get_response(messages, **kwargs)
        total_tokens["input_tokens"] += num_tokens["input_tokens"]
        total_tokens["output_tokens"] += num_tokens["output_tokens"]
        if not response_text:
            log_info("[<IMPLICIT ABSTAIN LM RES>]: " + "No response --> Re-prompt")
            continue
        log_info("[<IMPLICIT ABSTAIN LM RES>]: " + response_text)
        response_text = response_text.replace("Confident --> Answer: ", "").replace("Not confident --> Doctor Question: ", "")

        if "?" not in response_text:
            letter_choice = parse_choice(response_text, options_dict)
            if letter_choice:
                log_info("[<IMPLICIT ABSTAIN PARSED>]: " + letter_choice)
                answers.append(letter_choice)
                response_texts[letter_choice] = response_text
                choice_logprobs.append(log_probs)
        else:
            # not a choice, parse as question
            atomic_question = parse_atomic_question(response_text)
            if atomic_question:
                log_info("[<IMPLICIT ABSTAIN PARSED>]: " + atomic_question)
                questions.append(atomic_question)
                response_texts[atomic_question] = response_text
            else:
                log_info("[<IMPLICIT ABSTAIN PARSED>]: " + "FAILED TO PARSE --> Re-prompt")

    if len(answers) + len(questions) == 0:
        log_info("[<IMPLICIT ABSTAIN SC-PARSED>]: " + "No response.")
        return "No response.", None, None, 0.0, {}, total_tokens

    conf_score = len(answers) / (len(answers) + len(questions))
    if len(answers) > len(questions):
        final_answer = max(set(answers), key=answers.count)
        response_text = response_texts[final_answer]
        top_logprobs = choice_logprobs[answers.index(final_answer)]
        atomic_question = None
    else:
        final_answer = None
        rand_id = random.choice(range(len(questions)))
        atomic_question = questions[rand_id]
        response_text = response_texts[atomic_question]
        top_logprobs = None
    log_info(f"[<IMPLICIT ABSTAIN RETURN>]: atomic_question: {atomic_question}, final_answer: {final_answer}, conf_score: {conf_score} ([{len(answers)} : {len(questions)}])")
    return response_text, atomic_question, final_answer, conf_score, top_logprobs, total_tokens
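
To make the implicit-abstention vote above concrete, here is a tiny worked example of the aggregation logic (standalone, with hypothetical parse results):

```python
# Five self-consistency samples: three parsed as answers, two as questions.
answers = ["A", "A", "B"]
questions = ["Any chest pain?", "Any fever?"]

conf_score = len(answers) / (len(answers) + len(questions))  # 3 / 5 = 0.6
final_answer = max(set(answers), key=answers.count)          # majority vote -> "A"
# Answers outnumber questions, so the expert commits to "A" with confidence 0.6
# instead of asking another follow-up question.
```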
71
+
72
+
73
+
74
+ def expert_response_yes_no(messages, self_consistency=1, **kwargs):
+     """
+     Binary Abstain
+     """
+     log_info(f"++++++++++++++++++++ Start of YES/NO Decision [expert_basics.py:expert_response_yes_no()] ++++++++++++++++++++")
+     log_info(f"[<YES/NO PROMPT>] [len(messages)={len(messages)}] (messages[-1]):\n{messages[-1]['content']}")
+
+     yes_no_responses, log_probs_list, response_texts = [], [], {}
+     total_tokens = {"input_tokens": 0, "output_tokens": 0}
+     for i in range(self_consistency):
+         log_info(f"-------------------- Self-Consistency Iteration {i+1} --------------------")
+         response_text, log_probs, num_tokens = get_response(messages, **kwargs)
+         total_tokens["input_tokens"] += num_tokens["input_tokens"]
+         total_tokens["output_tokens"] += num_tokens["output_tokens"]
+         if not response_text:
+             log_info("[<YES/NO LM RES>]: " + "No response.")
+             continue  # skip this iteration so the parallel lists stay in sync
+         log_info("[<YES/NO LM RES>]: " + response_text)
+         log_probs_list.append(log_probs)
+
+         yes_choice = parse_yes_no(response_text)
+         log_info("[<YES/NO PARSED>]: " + yes_choice)
+         yes_no_responses.append(yes_choice)
+         response_texts[yes_choice] = response_text
+
+     if not yes_no_responses:
+         # if no iteration produced a parseable response, default to NO (abstain)
+         log_info("[<YES/NO SC-PARSED>]: " + "No response.")
+         return "No response.", "NO", 0.0, None, total_tokens
+
+     if yes_no_responses.count("YES") > yes_no_responses.count("NO"):
+         yes_choice = "YES"
+         log_probs = log_probs_list[yes_no_responses.index("YES")]
+     else:
+         yes_choice = "NO"
+         log_probs = log_probs_list[yes_no_responses.index("NO")]
+     log_info(f"[<YES/NO RETURN>]: yes_choice: {yes_choice}, confidence: {yes_no_responses.count('YES')/len(yes_no_responses)}")
+     return response_texts[yes_choice], yes_choice, yes_no_responses.count("YES")/len(yes_no_responses), log_probs, total_tokens
+
+
+
+ def expert_response_confidence_score(messages, self_consistency=1, **kwargs):
+     """
+     Numerical Abstain
+     """
+     log_info(f"++++++++++++++++++++ Start of Numerical Confidence Score [expert_basics.py:expert_response_confidence_score()] ++++++++++++++++++++")
+     log_info(f"[<CONF SCORE PROMPT>] [len(messages)={len(messages)}] (messages[-1]):\n{messages[-1]['content']}")
+
+     conf_scores, log_probs_dict, response_texts = [], {}, {}
+     total_tokens = {"input_tokens": 0, "output_tokens": 0}
+     for i in range(self_consistency):
+         log_info(f"-------------------- Self-Consistency Iteration {i+1} --------------------")
+         response_text, log_probs, num_tokens = get_response(messages, **kwargs)
+         total_tokens["input_tokens"] += num_tokens["input_tokens"]
+         total_tokens["output_tokens"] += num_tokens["output_tokens"]
+         if not response_text:
+             log_info("[<CONF SCORE LM RES>]: " + "No response.")
+             continue
+         log_info("[<CONF SCORE LM RES>]: " + response_text)
+
+         conf_score = parse_confidence_score(response_text)
+         conf_scores.append(conf_score)
+         log_probs_dict[conf_score] = log_probs
+         response_texts[conf_score] = response_text
+         log_info(f"[<CONF SCORE PARSED>]: {conf_score}")
+
+     if len(conf_scores) > 0:
+         avg_conf_score = sum(conf_scores) / len(conf_scores)
+         # return the sampled response whose score is closest to the average
+         deltas = [abs(r - avg_conf_score) for r in conf_scores]
+         closest_score = conf_scores[deltas.index(min(deltas))]
+         response_text = response_texts[closest_score]
+         log_probs = log_probs_dict[closest_score]
+     else:
+         avg_conf_score, response_text, log_probs = 0, "No response.", None
+     log_info(f"[<CONF SCORE RETURN>] (average conf score): {avg_conf_score}")
+     return response_text, avg_conf_score, log_probs, total_tokens
+
+
+
+ def expert_response_scale_score(messages, self_consistency=1, **kwargs):
+     """
+     Scale Abstain
+     """
+     log_info(f"++++++++++++++++++++ Start of Scale Confidence Score [expert_basics.py:expert_response_scale_score()] ++++++++++++++++++++")
+     log_info(f"[<SCALE SCORE PROMPT>] [len(messages)={len(messages)}] (messages[-1]):\n{messages[-1]['content']}")
+
+     conf_scores, log_probs_dict, response_texts = [], {}, {}
+     total_tokens = {"input_tokens": 0, "output_tokens": 0}
+     for i in range(self_consistency):
+         log_info(f"-------------------- Self-Consistency Iteration {i+1} --------------------")
+         response_text, log_probs, num_tokens = get_response(messages, **kwargs)
+         total_tokens["input_tokens"] += num_tokens["input_tokens"]
+         total_tokens["output_tokens"] += num_tokens["output_tokens"]
+         if not response_text:
+             log_info("[<SCALE SCORE LM RES>]: " + "No response.")
+             continue
+         log_info("[<SCALE SCORE LM RES>]: " + response_text)
+
+         conf_score = parse_likert_scale(response_text)
+         conf_scores.append(conf_score)
+         log_probs_dict[conf_score] = log_probs
+         response_texts[conf_score] = response_text
+         log_info("[<SCALE SCORE PARSED>]: " + str(conf_score))
+
+     if len(conf_scores) > 0:
+         avg_conf_score = sum(conf_scores) / len(conf_scores)
+         # return the sampled response whose score is closest to the average
+         deltas = [abs(r - avg_conf_score) for r in conf_scores]
+         closest_score = conf_scores[deltas.index(min(deltas))]
+         response_text = response_texts[closest_score]
+         log_probs = log_probs_dict[closest_score]
+     else:
+         avg_conf_score, response_text, log_probs = 0, "No response.", None
+     log_info(f"[<SCALE SCORE RETURN>] (average conf score): {avg_conf_score}")
+     return response_text, avg_conf_score, log_probs, total_tokens
+
+
+
+ def expert_response_choice(messages, options_dict, **kwargs):
+     """
+     Get intermediate answer choice regardless of abstention decision
+     """
+     log_info(f"++++++++++++++++++++ Start of Multiple Choice Decision [expert_basics.py:expert_response_choice()] ++++++++++++++++++++")
+     log_info(f"[<CHOICE PROMPT>] [len(messages)={len(messages)}] (messages[-1]):\n{messages[-1]['content']}")
+     response_text, log_probs, num_tokens = get_response(messages, **kwargs)
+     if not response_text:
+         log_info("[<CHOICE LM RES>]: " + "No response.")
+         return "No response.", None, num_tokens
+     log_info("[<CHOICE LM RES>]: " + response_text)
+
+     letter_choice = parse_choice(response_text, options_dict)
+     if letter_choice:
+         log_info("[<CHOICE PARSED>]: " + letter_choice)
+     else:
+         log_info("[<CHOICE PARSED>]: " + "FAILED TO PARSE.")
+
+     return response_text, letter_choice, num_tokens
+
+
+
+ def expert_response_question(messages, **kwargs):
+     """
+     Get follow-up question
+     """
+     log_info(f"++++++++++++++++++++ Start of Question Generator [expert_basics.py:expert_response_question()] ++++++++++++++++++++")
+     log_info(f"[<QUESTION GENERATOR PROMPT>] [len(messages)={len(messages)}] (messages[-1]):\n{messages[-1]['content']}")
+     response_text, log_probs, num_tokens = get_response(messages, **kwargs)
+     if not response_text:
+         log_info("[<QUESTION GENERATOR LM RES>]: " + "No response.")
+         return "No response.", None, num_tokens
+     log_info("[<QUESTION GENERATOR LM RES>]: " + response_text)
+
+     atomic_question = parse_atomic_question(response_text)
+     if atomic_question:
+         log_info("[<QUESTION GENERATOR PARSED>]: " + atomic_question)
+     else:
+         log_info("[<QUESTION GENERATOR PARSED>]: " + "FAILED TO PARSE.")
+
+     return response_text, atomic_question, num_tokens
+
+
+
+ ############################
+ # Helper Functions for Parsing Responses
+ ############################
+
+ def parse_atomic_question(response_text):
+     questions = []
+     for line in response_text.split("\n"):
+         if '?' in line:
+             questions.append(line.split(":")[-1].strip())
+
+     if len(questions) == 0:
+         log_info("can't find question in answer: {}".format(response_text), type="error")
+         return None
+
+     atomic_question = questions[-1].replace("'", "").replace('"', "").strip()
+     return atomic_question
+
+ def parse_choice(response_text, options_dict):
+     if response_text.strip() in ["A", "B", "C", "D"]:
+         return response_text.strip()
+     for response_line in response_text.split("\n"):
+         # match on the full option text first, then fall back to a bare letter token
+         for op_letter, op_text in options_dict.items():
+             if op_text.lower() in response_line.lower():
+                 log_info(f"....Found {op_text} in response line: {response_line}")
+                 return op_letter
+         for op_letter in options_dict.keys():
+             if op_letter in re.sub(r"[,.;@#()?!'/&:$]+\ *", " ", response_line).split(" "):
+                 log_info(f"....Found {op_letter} in response line: {response_line}")
+                 return op_letter
+     log_info("can't parse choice: {}".format(response_text), type="error")
+     return None
+
+ def parse_yes_no(response_text):
+     # split on the decision marker BEFORE lowercasing and stripping punctuation,
+     # otherwise "DECISION:" can never match the lowercased, colon-free text
+     temp_processed_response = response_text.split("DECISION:")[-1].lower().replace('.', '').replace(',', '').replace(';', '').replace(':', '').strip()
+     yes_answer = "yes" in temp_processed_response
+     no_answer = "no" in temp_processed_response
+     if yes_answer == no_answer:
+         yes_choice = "NO"
+         log_info("can't parse yes/no abstain answer: {}".format(response_text), type="error")
+     if yes_answer: yes_choice = "YES"
+     elif no_answer: yes_choice = "NO"
+     return yes_choice
+
+ def parse_confidence_score(response_text):
+     # parse the probability
+     float_regex = re.compile(r'\d+\.\d+')
+     scores = re.findall(float_regex, response_text)
+
+     if len(scores) == 0:
+         log_info("can't parse confidence score - answer: {}".format(response_text), type="error")
+         # fall back to a low confidence score with a little random jitter
+         score = round(0.2 + (random.random() - random.random()) * 0.2, 4)
+         return score
+
+     prob = float(scores[-1])
+     if len(scores) > 1: logging.warning("more than one confidence score - using last: {}".format(response_text))
+     if prob > 1: logging.warning("confidence score > 1: {}".format(response_text))
+     return prob
+
+ def parse_likert_scale(response_text):
+     temp_processed_response = response_text.lower().replace('.', '').replace(',', '').replace(';', '').replace(':', '')
+     if "very confident" in temp_processed_response:
+         conf_score = 5
+     elif "somewhat confident" in temp_processed_response:
+         conf_score = 4
+     elif "neither confident nor unconfident" in temp_processed_response or "neither confident or unconfident" in temp_processed_response:
+         conf_score = 3
+     elif "somewhat unconfident" in temp_processed_response:
+         conf_score = 2
+     elif "very unconfident" in temp_processed_response:
+         conf_score = 1
+     else:
+         conf_score = 0
+         log_info("can't parse likert confidence score: {}".format(response_text), type="error")
+     return conf_score
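
For reference, a minimal sanity check of these parsers on hypothetical responses (the inputs below are made up; this assumes expert_basics is importable with its loggers configured):

import expert_basics

options = {"A": "Sepsis", "B": "Iron deficiency anemia", "C": "Gout", "D": "Asthma"}
# a bare letter token in the line is matched after the full option text fails
assert expert_basics.parse_choice("ANSWER: B", options) == "B"
# the text after the last "DECISION:" marker decides YES/NO
assert expert_basics.parse_yes_no("REASON: vitals are stable.\nDECISION: YES") == "YES"
# Likert phrases map onto the 1-5 scale
assert expert_basics.parse_likert_scale("I am somewhat confident in option B.") == 4
# the last float in the response is taken as the probability
assert expert_basics.parse_confidence_score("SCORE: 0.85") == 0.85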
src/expert_functions.py ADDED
@@ -0,0 +1,333 @@
+ import prompts
+ import expert_basics
+ import logging
+
+ PROB_THRESHOLD = 0.8
+ SCALE_THRESHOLD = 4.0
+
+ def answer_to_idx(answer):
+     return ord(answer) - ord("A")
+
+ def log_info(message, logger="detail_logger", print_to_std=False):
+     # accept either a logger name or a logger object; drop the message if the
+     # named logger has not been configured
+     if isinstance(logger, str):
+         logger = logging.getLogger(logger) if logger in logging.getLogger().manager.loggerDict else None
+     if logger: logger.info(message)
+     if print_to_std: print(message + "\n")
+
+
+
+ def fixed_abstention_decision(max_depth, patient_state, inquiry, options_dict, **kwargs):
+     """
+     Fixed abstention strategy based on the current interaction length.
+     If the interaction length is less than the max depth, abstain, otherwise answer.
+     """
+     # first get the abstention decision
+     log_info(f"++++++++++++++++++++ Start of Fixed Abstention [expert_functions.py:fixed_abstention_decision()] ++++++++++++++++++++")
+     abstain_decision = len(patient_state['interaction_history']) < max_depth
+     conf_score = 1 if abstain_decision else 0
+     log_info(f"[ABSTENTION RESPONSE]: {abstain_decision}\n")
+
+     # second, no matter what the abstention decision is, get an intermediate answer for evaluation and analysis
+     patient_info = patient_state["initial_info"]
+     conv_log = '\n'.join([f"{prompts.expert_system['question_word']}: {qa['question']}\n{prompts.expert_system['answer_word']}: {qa['answer']}" for qa in patient_state["interaction_history"]])
+     options_text = f'A: {options_dict["A"]}, B: {options_dict["B"]}, C: {options_dict["C"]}, D: {options_dict["D"]}'
+
+     prompt_answer = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, prompts.expert_system["answer"])
+     messages_answer = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_answer}
+     ]
+     response_text, letter_choice, num_tokens = expert_basics.expert_response_choice(messages_answer, options_dict, **kwargs)
+
+     log_info(f"[FIXED ABSTAIN RETURN]: abstain: {abstain_decision}, confidence: {conf_score}, letter_choice: {letter_choice}, usage: {num_tokens}\n")
+     return {
+         "abstain": abstain_decision,
+         "confidence": conf_score,
+         "usage": num_tokens,
+         "messages": messages_answer,
+         "letter_choice": letter_choice,
+     }
+
+
+
+ def implicit_abstention_decision(patient_state, rationale_generation, inquiry, options_dict, **kwargs):
+     """
+     Implicit abstention strategy based on the current patient state.
+     This function uses the expert system to make a decision on whether to abstain or not based on the current patient state.
+     """
+     # Get the response from the expert system
+     prompt_key = "implicit_RG" if rationale_generation else "implicit"
+     abstain_task_prompt = prompts.expert_system[prompt_key]
+
+     patient_info = patient_state["initial_info"]
+     conv_log = '\n'.join([f"{prompts.expert_system['question_word']}: {qa['question']}\n{prompts.expert_system['answer_word']}: {qa['answer']}" for qa in patient_state["interaction_history"]])
+     options_text = f'A: {options_dict["A"]}, B: {options_dict["B"]}, C: {options_dict["C"]}, D: {options_dict["D"]}'
+
+     # first get the model's abstention decision
+     prompt_abstain = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, abstain_task_prompt)
+
+     messages = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_abstain}
+     ]
+     response_text, atomic_question, letter_choice, conf_score, top_logprobs, num_tokens = expert_basics.expert_response_choice_or_question(messages, options_dict, **kwargs)
+     log_info(f"[ABSTENTION PROMPT]: {messages}")
+     log_info(f"[ABSTENTION RESPONSE]: {response_text}\n")
+     messages.append({"role": "assistant", "content": response_text})
+
+     # a generated question means the model is abstaining from answering; a letter
+     # choice means it is answering; anything else defaults to abstaining
+     if atomic_question is not None: abstain_decision = True
+     elif letter_choice is not None: abstain_decision = False
+     else: abstain_decision = True
+
+     # second, no matter what the model's abstention decision is, get an intermediate answer for evaluation and analysis
+     # note that we get this for free if implicit abstain already chooses an answer instead of a question
+     if letter_choice is None:
+         prompt_answer = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, prompts.expert_system["answer"])
+         messages_answer = [
+             {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+             {"role": "user", "content": prompt_answer}
+         ]
+         response_text, letter_choice, num_tokens_answer = expert_basics.expert_response_choice(messages_answer, options_dict, **kwargs)
+         num_tokens["input_tokens"] += num_tokens_answer["input_tokens"]
+         num_tokens["output_tokens"] += num_tokens_answer["output_tokens"]
+
+     log_info(f"[IMPLICIT ABSTAIN RETURN]: abstain: {abstain_decision}, confidence: {conf_score}, letter_choice: {letter_choice}, usage: {num_tokens}, atomic_question: {atomic_question}\n")
+     return {
+         "abstain": abstain_decision,
+         "confidence": conf_score,
+         "usage": num_tokens,
+         "messages": messages,
+         "letter_choice": letter_choice,
+         "atomic_question": atomic_question,
+     }
+
+
+
+ def binary_abstention_decision(patient_state, rationale_generation, inquiry, options_dict, **kwargs):
+     """
+     Binary abstention strategy based on the current patient state.
+     This function prompts the model to make a binary decision on whether to abstain or not based on the current patient state.
+     """
+     # Get the response from the expert system
+     prompt_key = "binary_RG" if rationale_generation else "binary"
+     abstain_task_prompt = prompts.expert_system[prompt_key]
+
+     patient_info = patient_state["initial_info"]
+     conv_log = '\n'.join([f"{prompts.expert_system['question_word']}: {qa['question']}\n{prompts.expert_system['answer_word']}: {qa['answer']}" for qa in patient_state["interaction_history"]])
+     options_text = f'A: {options_dict["A"]}, B: {options_dict["B"]}, C: {options_dict["C"]}, D: {options_dict["D"]}'
+
+     # first get the model's abstention decision
+     prompt_abstain = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, abstain_task_prompt)
+
+     messages = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_abstain}
+     ]
+     response_text, abstain_decision, conf_score, log_probs, num_tokens = expert_basics.expert_response_yes_no(messages, **kwargs)
+     abstain_decision = abstain_decision.lower() == 'no'  # "NO" (not confident to answer) means abstain
+     log_info(f"[ABSTENTION PROMPT]: {messages}")
+     log_info(f"[ABSTENTION RESPONSE]: {response_text}\n")
+     messages.append({"role": "assistant", "content": response_text})
+
+     # second, no matter what the model's abstention decision is, get an intermediate answer for evaluation and analysis
+     prompt_answer = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, prompts.expert_system["answer"])
+     messages_answer = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_answer}
+     ]
+     response_text, letter_choice, num_tokens_answer = expert_basics.expert_response_choice(messages_answer, options_dict, **kwargs)
+     num_tokens["input_tokens"] += num_tokens_answer["input_tokens"]
+     num_tokens["output_tokens"] += num_tokens_answer["output_tokens"]
+
+     log_info(f"[BINARY ABSTAIN RETURN]: abstain: {abstain_decision}, confidence: {conf_score}, letter_choice: {letter_choice}, usage: {num_tokens}\n")
+     return {
+         "abstain": abstain_decision,
+         "confidence": conf_score,
+         "usage": num_tokens,
+         "messages": messages,
+         "letter_choice": letter_choice,
+     }
+
+
+
+ def numerical_abstention_decision(patient_state, rationale_generation, inquiry, options_dict, **kwargs):
+     """
+     Numerical abstention strategy based on the current patient state.
+     This function prompts the model to produce a numerical confidence score for its decision, then asks whether it wants to proceed.
+     """
+
+     # Get the response from the expert system
+     prompt_key = "numerical_RG" if rationale_generation else "numerical"
+     abstain_task_prompt = prompts.expert_system[prompt_key]
+
+     patient_info = patient_state["initial_info"]
+     conv_log = '\n'.join([f"{prompts.expert_system['question_word']}: {qa['question']}\n{prompts.expert_system['answer_word']}: {qa['answer']}" for qa in patient_state["interaction_history"]])
+     options_text = f'A: {options_dict["A"]}, B: {options_dict["B"]}, C: {options_dict["C"]}, D: {options_dict["D"]}'
+
+     # first get the model's abstention decision
+     prompt_abstain = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, abstain_task_prompt)
+
+     messages = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_abstain}
+     ]
+     response_text, conf_score, log_probs, num_tokens = expert_basics.expert_response_confidence_score(messages, **kwargs)
+     messages.append({"role": "assistant", "content": response_text})
+
+     messages.append({"role": "user", "content": prompts.expert_system["yes_no"]})
+     # the third return value would be the confidence score in the binary setup; it is
+     # unused here because we already have the confidence score from the previous turn
+     response_text, abstain_decision, _, log_probs, num_tokens_2 = expert_basics.expert_response_yes_no(messages, **kwargs)
+     abstain_decision = abstain_decision.lower() == 'no'  # "NO" (not confident to answer) means abstain
+     num_tokens["input_tokens"] += num_tokens_2["input_tokens"]
+     num_tokens["output_tokens"] += num_tokens_2["output_tokens"]
+     log_info(f"[ABSTENTION PROMPT]: {messages}")
+     log_info(f"[ABSTENTION RESPONSE]: {response_text}\n")
+     messages.append({"role": "assistant", "content": response_text})
+
+     # second, no matter what the model's abstention decision is, get an intermediate answer for evaluation and analysis
+     prompt_answer = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, prompts.expert_system["answer"])
+     messages_answer = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_answer}
+     ]
+     response_text, letter_choice, num_tokens_answer = expert_basics.expert_response_choice(messages_answer, options_dict, **kwargs)
+     num_tokens["input_tokens"] += num_tokens_answer["input_tokens"]
+     num_tokens["output_tokens"] += num_tokens_answer["output_tokens"]
+
+     log_info(f"[NUMERICAL ABSTAIN RETURN]: abstain: {abstain_decision}, confidence: {conf_score}, letter_choice: {letter_choice}, usage: {num_tokens}\n")
+     return {
+         "abstain": abstain_decision,
+         "confidence": conf_score,
+         "usage": num_tokens,
+         "messages": messages,
+         "letter_choice": letter_choice,
+     }
+
+
+
+ def numcutoff_abstention_decision(patient_state, rationale_generation, inquiry, options_dict, abstain_threshold, **kwargs):
+     """
+     Numcutoff abstention strategy based on the current patient state.
+     This function prompts the model to produce a numerical confidence score for its decision, then decides abstention by comparing that score against a preset threshold.
+     """
+     if not abstain_threshold: abstain_threshold = PROB_THRESHOLD
+
+     # Get the response from the expert system
+     prompt_key = "numcutoff_RG" if rationale_generation else "numcutoff"
+     abstain_task_prompt = prompts.expert_system[prompt_key]
+
+     patient_info = patient_state["initial_info"]
+     conv_log = '\n'.join([f"{prompts.expert_system['question_word']}: {qa['question']}\n{prompts.expert_system['answer_word']}: {qa['answer']}" for qa in patient_state["interaction_history"]])
+     options_text = f'A: {options_dict["A"]}, B: {options_dict["B"]}, C: {options_dict["C"]}, D: {options_dict["D"]}'
+
+     # first get the model's abstention decision
+     prompt_abstain = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, abstain_task_prompt)
+
+     messages = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_abstain}
+     ]
+     response_text, conf_score, log_probs, num_tokens = expert_basics.expert_response_confidence_score(messages, abstain_threshold=abstain_threshold, **kwargs)
+     abstain_decision = conf_score < abstain_threshold
+     log_info(f"[ABSTENTION PROMPT]: {messages}")
+     log_info(f"[ABSTENTION RESPONSE]: {response_text}\n")
+     messages.append({"role": "assistant", "content": response_text})
+
+     # second, no matter what the model's abstention decision is, get an intermediate answer for evaluation and analysis
+     prompt_answer = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, prompts.expert_system["answer"])
+     messages_answer = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_answer}
+     ]
+     response_text, letter_choice, num_tokens_answer = expert_basics.expert_response_choice(messages_answer, options_dict, **kwargs)
+     num_tokens["input_tokens"] += num_tokens_answer["input_tokens"]
+     num_tokens["output_tokens"] += num_tokens_answer["output_tokens"]
+
+     log_info(f"[NUMCUTOFF ABSTAIN RETURN]: abstain: {abstain_decision}, confidence: {conf_score}, letter_choice: {letter_choice}, usage: {num_tokens}\n")
+     return {
+         "abstain": abstain_decision,
+         "confidence": conf_score,
+         "usage": num_tokens,
+         "messages": messages,
+         "letter_choice": letter_choice,
+     }
+
+
+
+ def scale_abstention_decision(patient_state, rationale_generation, inquiry, options_dict, abstain_threshold, **kwargs):
+     """
+     Likert abstention strategy based on the current patient state.
+     This function prompts the model to produce a Likert-scale confidence score for its decision, then decides abstention based on a cutoff.
+     """
+     if not abstain_threshold: abstain_threshold = SCALE_THRESHOLD
+
+     # Get the response from the expert system
+     prompt_key = "scale_RG" if rationale_generation else "scale"
+     abstain_task_prompt = prompts.expert_system[prompt_key]
+
+     patient_info = patient_state["initial_info"]
+     conv_log = '\n'.join([f"{prompts.expert_system['question_word']}: {qa['question']}\n{prompts.expert_system['answer_word']}: {qa['answer']}" for qa in patient_state["interaction_history"]])
+     options_text = f'A: {options_dict["A"]}, B: {options_dict["B"]}, C: {options_dict["C"]}, D: {options_dict["D"]}'
+
+     # first get the model's abstention decision
+     prompt_abstain = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, abstain_task_prompt)
+
+     messages = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_abstain}
+     ]
+     response_text, conf_score, log_probs, num_tokens = expert_basics.expert_response_scale_score(messages, abstain_threshold=abstain_threshold, **kwargs)
+     abstain_decision = conf_score < abstain_threshold
+     log_info(f"[ABSTENTION PROMPT]: {messages}")
+     log_info(f"[ABSTENTION RESPONSE]: {response_text}\n")
+     messages.append({"role": "assistant", "content": response_text})
+
+     # second, no matter what the model's abstention decision is, get an intermediate answer for evaluation and analysis
+     prompt_answer = prompts.expert_system["curr_template"].format(patient_info, conv_log if conv_log != '' else 'None', inquiry, options_text, prompts.expert_system["answer"])
+     messages_answer = [
+         {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+         {"role": "user", "content": prompt_answer}
+     ]
+     response_text, letter_choice, num_tokens_answer = expert_basics.expert_response_choice(messages_answer, options_dict, **kwargs)
+     num_tokens["input_tokens"] += num_tokens_answer["input_tokens"]
+     num_tokens["output_tokens"] += num_tokens_answer["output_tokens"]
+
+     log_info(f"[SCALE ABSTAIN RETURN]: abstain: {abstain_decision}, confidence: {conf_score}, letter_choice: {letter_choice}, usage: {num_tokens}\n")
+     return {
+         "abstain": abstain_decision,
+         "confidence": conf_score,
+         "usage": num_tokens,
+         "messages": messages,
+         "letter_choice": letter_choice,
+     }
+
+
+
+ def question_generation(patient_state, inquiry, options_dict, messages, independent_modules, **kwargs):
+     task_prompt = prompts.expert_system["atomic_question_improved"]
+
+     if independent_modules:
+         patient_info = patient_state["initial_info"]
+         conv_log = '\n'.join([f"{prompts.expert_system['question_word']}: {qa['question']}\n{prompts.expert_system['answer_word']}: {qa['answer']}" for qa in patient_state["interaction_history"]])
+         options_text = f'A: {options_dict["A"]}, B: {options_dict["B"]}, C: {options_dict["C"]}, D: {options_dict["D"]}'
+         prompt = prompts.expert_system["curr_template"].format(patient_info, conv_log, inquiry, options_text, task_prompt)
+
+         messages = [
+             {"role": "system", "content": prompts.expert_system["meditron_system_msg"]},
+             {"role": "user", "content": prompt}
+         ]
+     else:
+         messages.append({"role": "user", "content": task_prompt})
+
+     response_text, atomic_question, num_tokens = expert_basics.expert_response_question(messages, **kwargs)
+     log_info(f"[ATOMIC QUESTION PROMPT]: {messages}")
+     log_info(f"[ATOMIC QUESTION RESPONSE]: {atomic_question}\n")
+     # fall back to the raw response if no atomic question could be parsed
+     messages.append({"role": "assistant", "content": atomic_question if atomic_question else response_text})
+
+     log_info(f"[ATOMIC QUESTION RETURN]: {atomic_question}, usage: {num_tokens}\n")
+     return {
+         "atomic_question": atomic_question,
+         "messages": messages,
+         "usage": num_tokens,
+     }
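
All five abstention strategies return the same dict contract ("abstain", "confidence", "letter_choice", "usage", "messages"), so a caller can dispatch on a strategy name. A minimal sketch of such a dispatcher (hypothetical; it assumes the kwargs forwarded through expert_basics ultimately carry the model settings, e.g. model_name, that helper.get_response expects):

import expert_functions

# maps a strategy name to the functions defined above (numcutoff/scale also
# take an abstain_threshold and would need a separate branch)
ABSTAIN_STRATEGIES = {
    "implicit": expert_functions.implicit_abstention_decision,
    "binary": expert_functions.binary_abstention_decision,
    "numerical": expert_functions.numerical_abstention_decision,
}

def decide(strategy, patient_state, inquiry, options_dict, **kwargs):
    fn = ABSTAIN_STRATEGIES[strategy]
    result = fn(patient_state, rationale_generation=False, inquiry=inquiry,
                options_dict=options_dict, **kwargs)
    return result  # dict with "abstain", "confidence", "letter_choice", ...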
src/helper.py ADDED
@@ -0,0 +1,158 @@
+ import torch
+ import logging
+ from keys import mykey
+
+ # A dictionary to cache models and tokenizers to avoid reloading
+
+ global models
+ models = {}
+
+ def log_info(message, logger_name="message_logger", print_to_std=False, mode="info"):
+     logger = logging.getLogger(logger_name)
+     if logger:
+         if mode == "error": logger.error(message)
+         elif mode == "warning": logger.warning(message)
+         else: logger.info(message)
+     if print_to_std: print(message + "\n")
+
+ class ModelCache:
+     def __init__(self, model_name, use_vllm=False, use_api=None, **kwargs):
+         self.model_name = model_name
+         self.use_vllm = use_vllm
+         self.use_api = use_api
+         self.model = None
+         self.tokenizer = None
+         self.terminators = None
+         self.client = None
+         self.args = kwargs
+         self.load_model_and_tokenizer()
+
+     def load_model_and_tokenizer(self):
+         if self.use_api == "openai":
+             from openai import OpenAI
+             self.api_account = self.args.get("api_account", "openai")
+             self.client = OpenAI(api_key=mykey[self.api_account])  # set up the API key appropriately in keys.py
+         elif self.use_vllm:
+             try:
+                 from vllm import LLM
+                 enable_prefix_caching = self.args.get("enable_prefix_caching", False)
+                 self.model = LLM(model=self.model_name, enable_prefix_caching=enable_prefix_caching)
+                 from transformers import AutoTokenizer
+                 self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
+                 self.tokenizer.pad_token = self.tokenizer.eos_token
+                 self.tokenizer.pad_token_id = self.tokenizer.eos_token_id
+                 self.terminators = [self.tokenizer.eos_token_id, self.tokenizer.convert_tokens_to_ids("<|eot_id|>")]
+             except Exception as e:
+                 log_info(f"[ERROR] [{self.model_name}]: model is not compatible with vLLM (expected for custom local models); falling back to Hugging Face, so this error can be ignored: {str(e)}", mode="error")
+                 self.use_vllm = False
+         if not self.use_vllm and self.use_api != "openai":
+             from transformers import AutoModelForCausalLM, AutoTokenizer
+             self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
+             self.model = AutoModelForCausalLM.from_pretrained(self.model_name)
+             self.model.eval()  # set the model to evaluation mode
+             self.tokenizer.pad_token = self.tokenizer.eos_token
+             self.tokenizer.pad_token_id = self.tokenizer.eos_token_id
+             self.terminators = [self.tokenizer.eos_token_id, self.tokenizer.convert_tokens_to_ids("<|eot_id|>")]
+
+     def generate(self, messages):
+         log_info(f"[{self.model_name}][INPUT]: {messages}")
+
+         self.temperature = self.args.get("temperature", 0.6)
+         self.max_tokens = self.args.get("max_tokens", 256)
+         self.top_p = self.args.get("top_p", 0.9)
+         self.top_logprobs = self.args.get("top_logprobs", 0)
+
+         if self.use_api == "openai": return self.openai_generate(messages)
+         elif self.use_vllm: return self.vllm_generate(messages)
+         else: return self.huggingface_generate(messages)
+
+     def huggingface_generate(self, messages):
+         try:
+             inputs = self.tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(self.model.device)
+         except Exception:
+             # Join messages into a single prompt for general language models
+             log_info(f"[{self.model_name}]: Could not apply chat template to messages.", mode="warning")
+             prompt = "\n\n".join([m['content'] for m in messages])
+             inputs = self.tokenizer(prompt, return_tensors="pt").input_ids.to(self.model.device)
+
+         outputs = self.model.generate(
+             inputs,
+             do_sample=True,
+             max_new_tokens=self.max_tokens,
+             temperature=self.temperature,
+             top_p=self.top_p,
+             pad_token_id=self.tokenizer.pad_token_id,
+             eos_token_id=self.terminators
+         )
+         # TODO: If top_logprobs > 0, return logprobs of generation
+         response_text = self.tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
+         usage = {"input_tokens": inputs.shape[-1], "output_tokens": outputs.shape[-1] - inputs.shape[-1]}
+         output_dict = {'response_text': response_text, 'usage': usage}
+
+         log_info(f"[{self.model_name}][OUTPUT]: {output_dict}")
+         return response_text, None, usage
+
+     def vllm_generate(self, messages):
+         try:
+             inputs = self.tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+         except Exception:
+             # Join messages into a single prompt for general language models
+             log_info(f"[{self.model_name}]: Could not apply chat template to messages.", mode="warning")
+             inputs = "\n\n".join([m['content'] for m in messages])
+
+         from vllm import SamplingParams
+         frequency_penalty = self.args.get("frequency_penalty", 0)
+         presence_penalty = self.args.get("presence_penalty", 0)
+         sampling_params = SamplingParams(temperature=self.temperature, max_tokens=self.max_tokens, top_p=self.top_p, logprobs=self.top_logprobs,
+                                          frequency_penalty=frequency_penalty, presence_penalty=presence_penalty)
+
+         outputs = self.model.generate(inputs, sampling_params)
+         response_text = outputs[0].outputs[0].text
+         logprobs = outputs[0].outputs[0].cumulative_logprob
+         # TODO: If top_logprobs > 0, return logprobs of generation
+         # if self.top_logprobs > 0: logprobs = outputs[0].outputs[0].logprobs
+         usage = {"input_tokens": len(outputs[0].prompt_token_ids), "output_tokens": len(outputs[0].outputs[0].token_ids)}
+         output_dict = {'response_text': response_text, 'usage': usage}
+
+         log_info(f"[{self.model_name}][OUTPUT]: {output_dict}")
+         return response_text, logprobs, usage
+
+     def openai_generate(self, messages):
+         if self.top_logprobs == 0:
+             response = self.client.chat.completions.create(
+                 model=self.model_name,
+                 messages=messages,
+                 temperature=self.temperature,
+                 max_tokens=self.max_tokens,
+                 top_p=self.top_p
+             )
+         else:
+             response = self.client.chat.completions.create(
+                 model=self.model_name,
+                 messages=messages,
+                 temperature=self.temperature,
+                 max_tokens=self.max_tokens,
+                 top_p=self.top_p,
+                 logprobs=True,
+                 top_logprobs=self.top_logprobs
+             )
+
+         # the v1 OpenAI client returns objects, not dicts
+         num_input_tokens = response.usage.prompt_tokens
+         num_output_tokens = response.usage.completion_tokens
+         response_text = response.choices[0].message.content.strip()
+         log_probs = response.choices[0].logprobs.content if self.top_logprobs > 0 else None
+
+         log_info(f"[{self.model_name}][OUTPUT]: {response}")
+         return response_text, log_probs, {"input_tokens": num_input_tokens, "output_tokens": num_output_tokens}
+
+
+ if 'gpt' in model_name or 'o1' in model_name: use_api = "openai"
152
+
153
+ model_cache = models.get(model_name, None)
154
+ if model_cache is None:
155
+ model_cache = ModelCache(model_name, use_vllm=use_vllm, use_api=use_api, **kwargs)
156
+ models[model_name] = model_cache
157
+
158
+ return model_cache.generate(messages)
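
A minimal usage sketch of this cache-backed entry point (the model name below is an assumption; any model id the environment supports would do):

from helper import get_response

messages = [{"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Reply with one word."}]

# The first call for a given model name constructs and caches a ModelCache;
# later calls with the same name reuse the loaded model.
text, log_probs, usage = get_response(messages, "meta-llama/Meta-Llama-3-8B-Instruct",
                                      use_vllm=True, max_tokens=16)
print(text, usage)

Note that generation kwargs such as max_tokens are stored on the cached ModelCache at construction time, so subsequent calls for the same model reuse the first call's settings.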
src/keys.py ADDED
@@ -0,0 +1,3 @@
+ mykey = {
+     "mediQ": "sk-1234567890abcdef1234567890abcdef12345678",
+ }
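
Worth noting: ModelCache.load_model_and_tokenizer looks up mykey[api_account] with api_account defaulting to "openai", so OpenAI runs need a matching entry unless api_account is overridden. A hypothetical keys.py reflecting that (the bundled value is clearly a placeholder):

mykey = {
    "mediQ": "sk-...",    # placeholder, replace with a real key
    "openai": "sk-...",   # assumed entry matching ModelCache's default api_account
}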
src/mediQ_benchmark.py ADDED
@@ -0,0 +1,177 @@
+ import json
+ import os
+ import time
+ import logging
+ from args import get_args
+ from patient import Patient
+ import importlib
+
+ def setup_logger(name, file):
+     if not file: return None
+     logger = logging.getLogger(name)
+     handler = logging.FileHandler(file, mode='a')
+     formatter = logging.Formatter('[%(asctime)s] [%(levelname)s] %(message)s')
+     handler.setFormatter(formatter)
+     logger.addHandler(handler)
+     logger.setLevel(logging.INFO)
+     return logger
+
+ def log_info(message, print_to_std=False):
+     if history_logger: history_logger.info(message)
+     if detail_logger: detail_logger.info(message)
+     if print_to_std: print(message + "\n")
+
+ def load_data(filename):
+     with open(filename, "r") as json_file:
+         json_list = list(json_file)
+     data = [json.loads(line) for line in json_list]
+     data = {item['id']: item for item in data}
+     return data
+
+ def main():
+     if os.path.exists(args.output_filename):
+         with open(args.output_filename, "r") as f:
+             lines = f.readlines()
+         output_data = [json.loads(line) for line in lines]
+         # map already-processed sample ids to their cached evaluation stats
+         processed_ids = {sample["id"]: {"correct": sample["interactive_system"]["letter_choice"] == sample["info"]["correct_answer_idx"],
+                                         "timeout": len(sample["interactive_system"]["intermediate_choices"]) > args.max_questions,
+                                         "turns": sample["interactive_system"]["num_questions"]}
+                          for sample in output_data}
+     else:
+         processed_ids = {}
+
+     expert_module = importlib.import_module(args.expert_module)
+     expert_class = getattr(expert_module, args.expert_class)
+     patient_module = importlib.import_module(args.patient_module)
+     patient_class = getattr(patient_module, args.patient_class)
+
+     patient_data_path = os.path.join(args.data_dir, args.dev_filename)
+     patient_data = load_data(patient_data_path)
+
+     num_processed = 0
+     correct_history, timeout_history, turn_lengths = [], [], []
+
+     for pid, sample in patient_data.items():
+         if pid in processed_ids:
+             print(f"Skipping patient {pid} as it has already been processed.")
+             correct_history.append(processed_ids[pid]["correct"])
+             timeout_history.append(processed_ids[pid]["timeout"])
+             turn_lengths.append(processed_ids[pid]["turns"])
+             continue
+
+         log_info(f"|||||||||||||||||||| PATIENT #{pid} ||||||||||||||||||||")
+         letter_choice, questions, answers, temp_choice_list, temp_additional_info, sample_info = run_patient_interaction(expert_class, patient_class, sample)
+         log_info(f"|||||||||||||||||||| Interaction ended for patient #{pid} ||||||||||||||||||||\n\n\n")
+
+         output_dict = {
+             "id": pid,
+             "interactive_system": {
+                 "correct": letter_choice == sample["answer_idx"],
+                 "letter_choice": letter_choice,
+                 "questions": questions,
+                 "answers": answers,
+                 "num_questions": len(questions),
+                 "intermediate_choices": temp_choice_list,
+                 "temp_additional_info": temp_additional_info
+             },
+             "info": sample_info,
+             # TODO: add additional evaluation metrics for analysis, some metrics can be found in src/evaluate.py
+             # "eval": {
+             #     "confidence_scores": [],
+             #     "repeat_question_score": [],
+             #     "repeat_answer_score": [],
+             #     "relevancy_score": [],
+             #     "delta_confidence_score": [],
+             #     "specificity_score": []
+             # }
+         }
+
+         # create the directory if it does not exist
+         os.makedirs(os.path.dirname(args.output_filename), exist_ok=True)
+         with open(args.output_filename, 'a+') as f:
+             f.write(json.dumps(output_dict) + '\n')
+
+         correct_history.append(letter_choice == sample["answer_idx"])
+         timeout_history.append(len(temp_choice_list) > args.max_questions)
+         turn_lengths.append(len(temp_choice_list))
+         num_processed += 1
+
+     accuracy = sum(correct_history) / len(correct_history) if len(correct_history) > 0 else None
+     timeout_rate = sum(timeout_history) / len(timeout_history) if len(timeout_history) > 0 else None
+     avg_turns = sum(turn_lengths) / len(turn_lengths) if len(turn_lengths) > 0 else None
+
+     if results_logger: results_logger.info(f'Processed {num_processed}/{len(patient_data)} patients | Accuracy: {accuracy}')
+     print(f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] Processed {num_processed}/{len(patient_data)} patients | Accuracy: {accuracy} | Timeout Rate: {timeout_rate} | Avg. Turns: {avg_turns}")
+     print(f"Accuracy: {sum(correct_history)} / {len(correct_history)} = {accuracy}")
+     print(f"Timeout Rate: {sum(timeout_history)} / {len(timeout_history)} = {timeout_rate}")
+     print(f"Avg. Turns: {avg_turns}")
+
+
+ def run_patient_interaction(expert_class, patient_class, sample):
+     expert_system = expert_class(args, sample["question"], sample["options"])
+     patient_system = patient_class(args, sample)  # the patient system is initialized with the sample, which includes the necessary context
+     temp_choice_list = []
+     temp_additional_info = []  # to store optional data like confidence scores
+
+     while len(patient_system.get_questions()) < args.max_questions:
+         log_info(f"==================== Turn {len(patient_system.get_questions()) + 1} ====================")
+         patient_state = patient_system.get_state()
+         response_dict = expert_system.respond(patient_state)
+         log_info(f"[Expert System]: {response_dict}")
+
+         # Optional return values for analysis, e.g., confidence score, logprobs
+         temp_additional_info.append({k: v for k, v in response_dict.items() if k not in ["type", "letter_choice", "question"]})
+
+         if response_dict["type"] == "question":
+             # still make the Expert generate a choice based on the current state for intermediate evaluation, log the question as an intermediate choice
+             temp_choice_list.append(response_dict["letter_choice"])
+             # Patient generates an answer based on the last question asked, and adds it to memory
+             patient_response = patient_system.respond(response_dict["question"])
+             log_info(f"[Patient System]: {patient_response}")
+
+         elif response_dict["type"] == "choice":
+             expert_decision = response_dict["letter_choice"]
+             temp_choice_list.append(expert_decision)
+             sample_info = {
+                 "initial_info": patient_system.initial_info,
+                 "correct_answer": sample["answer"],
+                 "correct_answer_idx": sample["answer_idx"],
+                 "question": sample["question"],
+                 "options": sample["options"],
+                 "context": sample["context"],
+                 "facts": patient_system.facts,  # if the FactSelectPatient patient module is used, this stores the atomic facts the patient used to answer questions, for reproducibility
+             }
+             return expert_decision, patient_system.get_questions(), patient_system.get_answers(), temp_choice_list, temp_additional_info, sample_info
+
+         else:
+             raise ValueError("Invalid response type from expert_system.")
+
+     # If max questions are reached and no final decision has been made
+     log_info(f"==================== Max Interaction Length ({args.max_questions} turns) Reached --> Force Final Answer ====================")
+     patient_state = patient_system.get_state()
+     response_dict = expert_system.respond(patient_state)
+     log_info(f"[Expert System]: {response_dict}")
+     stuck_response = response_dict["letter_choice"]
+     # Optional return values for analysis, e.g., confidence score, logprobs
+     temp_additional_info.append({k: v for k, v in response_dict.items() if k != "letter_choice"})
+
+     sample_info = {
+         "initial_info": patient_system.initial_info,
+         "correct_answer": sample["answer"],
+         "correct_answer_idx": sample["answer_idx"],
+         "question": sample["question"],
+         "options": sample["options"],
+         "context": sample["context"],
+         "facts": patient_system.facts,  # if the FactSelectPatient patient module is used, this stores the atomic facts the patient used to answer questions, for reproducibility
+     }
+
+     return stuck_response, patient_system.get_questions(), patient_system.get_answers(), temp_choice_list + [stuck_response], temp_additional_info, sample_info
+
+
+ if __name__ == "__main__":
+     args = get_args()
+     results_logger = setup_logger('results_logger', args.log_filename)
+     history_logger = setup_logger('history_logger', args.history_log_filename)
+     detail_logger = setup_logger('detail_logger', args.detail_log_filename)
+     message_logger = setup_logger('message_logger', args.message_log_filename)
+     main()
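
For orientation, a minimal sample record carrying the fields run_patient_interaction and Patient actually read (the values below are made up; "initial_info" and "atomic_facts" are optional extras the Patient class will use when present):

sample = {
    "id": "demo-1",
    "context": "A 30-year-old woman presents with fatigue. Hemoglobin is 9 g/dL.",
    "question": "What is the most likely diagnosis?",
    "options": {"A": "Iron deficiency anemia", "B": "Hypothyroidism",
                "C": "Depression", "D": "Chronic kidney disease"},
    "answer": "Iron deficiency anemia",
    "answer_idx": "A",
}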
src/patient.py ADDED
@@ -0,0 +1,105 @@
+ import random
+ from helper import get_response
+
+ class Patient:
+     def __init__(self, args, sample):
+         # Assuming 'context' is a list or a long string of historical or background information
+         if isinstance(sample['context'], list) and len(sample['context']) > 0:
+             if 'initial_info' in sample: self.initial_info = sample['initial_info']
+             else: self.initial_info = sample['context'][0]  # taking the first item if it's a list
+             self.context_list = sample['context']
+             self.context_para = " ".join(sample['context'])
+         elif isinstance(sample['context'], str):
+             # Assuming sentences are separated by periods, taking the first sentence
+             if 'initial_info' in sample: self.initial_info = sample['initial_info']
+             else: self.initial_info = sample['context'].split(". ")[0]
+             temp = sample['context'].split(". ")
+             # re-attach the period that split() removed from all but the last sentence
+             self.context_list = [temp[i] + '.' if i != len(temp) - 1 and not temp[i].endswith('.') else temp[i] for i in range(len(temp))]
+             self.context_para = sample['context']
+         else:
+             if 'initial_info' in sample: self.initial_info = sample['initial_info']
+             else: self.initial_info = ""  # default fallback
+             self.context_list = []
+             self.context_para = 'None'
+
+         self.model_name = args.patient_model
+         self.history = []  # to track the interaction history of questions and answers
+         self.facts = sample['atomic_facts'] if 'atomic_facts' in sample else None  # atomic facts can be cached locally to avoid repeated processing
+
+         self.max_length = 50  # maximum length of the response (different from the expert system)
+         self.use_vllm = args.use_vllm
+         self.use_api = args.use_api  # use an API to generate responses
+
+     def update_state(self, question, answer):
+         # Update the internal history with the new question and the corresponding answer
+         self.history.append({"question": question, "answer": answer})
+
+     def get_state(self):
+         # Return the initial context and the history of interactions
+         return {
+             "initial_info": self.initial_info,
+             "interaction_history": self.history
+         }
+
+     def get_questions(self):
+         # Return the list of questions asked so far
+         return [qa["question"] for qa in self.history]
+
+     def get_answers(self):
+         # Return the list of answers provided so far
+         return [qa["answer"] for qa in self.history]
+
+     def get_response(self, messages, max_length=None):
+         if max_length is None: max_length = self.max_length
+         # helper.get_response reads this generation limit as "max_tokens"
+         return get_response(messages, self.model_name, use_vllm=self.use_vllm, use_api=self.use_api, max_tokens=max_length)
+
+     def respond(self, question):
+         raise NotImplementedError
+
+
+ class RandomPatient(Patient):
+     def respond(self, question):
+         # Randomly select a response mode
+         if random.random() < 0.5 or len(self.context_list) == 0:
+             answer = "The patient cannot answer this question, please do not ask this question again."
+         else:
+             answer = random.choice(self.context_list)
+         self.update_state(question, answer)
+         return answer
+
+ class DirectPatient(Patient):
+     def respond(self, question):
+         system_prompt = "Answer the question with the given context."
+         user_prompt = f"Context: \"{self.initial_info}\"\nQuestion: \"{question}\"\n"
+         messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}]
+         response, log_probs, num_tokens = self.get_response(messages)
+         self.update_state(question, response)
+         return response
+
+ class InstructPatient(Patient):
+     def respond(self, question):
+         system_prompt = "You are a truthful assistant that understands the patient's information, and you are trying to answer questions from a medical doctor about the patient."
+         user_prompt = f"Below is a context paragraph describing the patient and their conditions:\n\"{self.context_para}\"\nQuestion from the doctor: \"{question}\"\nUse the context paragraph to answer the doctor's question. If the paragraph does not answer the question, simply say \"The patient cannot answer this question, please do not ask this question again.\" Answer only what the question asks for. Do not provide any analysis, inference, or implications. Respond with a straightforward answer to the question ONLY and NOTHING ELSE."
+         messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}]
+         response, log_probs, num_tokens = self.get_response(messages)
+         self.update_state(question, response)
+         return response
+
+ class FactSelectPatient(Patient):
+     def respond(self, question):
+         if not self.facts:
+             # Decompose context into facts if not already done
+             system_prompt = "You are a truthful medical assistant that understands the patient's information."
+             user_prompt = f"Break the following patient information into a list of independent atomic facts, with one piece of information in each statement. Each fact should only include the smallest unit of information, but should be self-contained.\n\"{self.context_para}\"\nRespond with the list of atomic facts and nothing else, prepend each fact by an index starting from 1. No sub-list allowed."
+             messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}]
+             response_text, log_probs, num_tokens = self.get_response(messages, max_length=1000)
+             self.facts = [s.strip() for s in response_text.splitlines() if s.strip()]
+
+         facts_prompt = "\n".join(self.facts)
+         system_prompt = "You are a truthful medical assistant that understands the patient's information, and you are trying to answer questions from a medical doctor about the patient given a list of factual statements describing the patient. Please return the facts that answer the doctor's question verbatim without any additional information. If none of the facts answer the question, simply say \"The patient cannot answer this question, please do not ask this question again.\""
+         prompt = f"List of facts:\n{facts_prompt}\n\nQuestion from the doctor: \"{question}\"\n\nStatements that answer the question:"
+         messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": prompt}]
+         response, log_probs, num_tokens = self.get_response(messages)
+         self.update_state(question, response)
+         return response
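
A hypothetical smoke test for the model-free variant above; args is assumed to be any object carrying the attributes Patient reads (patient_model, use_vllm, use_api):

from types import SimpleNamespace
from patient import RandomPatient

args = SimpleNamespace(patient_model="meta-llama/Meta-Llama-3-8B-Instruct",
                       use_vllm=False, use_api=None)
sample = {"context": "A 60-year-old man presents with cough. He smokes. He has a fever."}
patient = RandomPatient(args, sample)
print(patient.respond("Do you smoke?"))  # a random context sentence or the refusal string
print(patient.get_state())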
src/prompts.py ADDED
@@ -0,0 +1,113 @@
+ expert_system = {
2
+ "meditron_system_msg_old": "You are a medical doctor answering real-world medical entrance exam questions. Based on your understanding of basic and clinical science, medical knowledge, and mechanisms underlying health, disease, patient care, and modes of therapy, answer the following multiplechoice question. Base your answer on the current and standard practices referenced in medical guidelines.\nTask: You will be asked to reason through the current patient's information and either ask an information seeking question or choose an option.",
3
+
4
+ "meditron_system_msg_original": "You are a medical doctor answering real-world medical entrance exam questions. Based on your understanding of basic and clinical science, medical knowledge, and mechanisms underlying health, disease, patient care, and modes of therapy, answer the following multiple choice question. Base your answer on the current and standard practices referenced in medical guidelines.",
5
+
6
+ "meditron_system_msg": "You are a medical doctor trying to reason through a real-life clinical case. Based on your understanding of basic and clinical science, medical knowledge, and mechanisms underlying health, disease, patient care, and modes of therapy, respond according to the task specified by the user. Base your response on the current and standard practices referenced in medical guidelines.",
7
+
8
+ "basic_system_msg": "You are an experienced doctor trying to make a medical decision about a patient.",
9
+
10
+ "empty_system_msg": "",
11
+
12
+ "only_choice": "Please answer with ONLY the correct letter choice (JUST ONE LETTER and NOTHING ELSE): A, B, C, or D.",
13
+
14
+ "system": "You are an experienced doctor trying to make a medical decision about a patient.",
15
+
16
+ "starter": """A patient comes into the clinic presenting with a symptom as described in the conversation log below:\n\nCONVERSATION LOG:\n""",
17
+
18
+ "question_word": "Doctor Question",
19
+ "answer_word": "Patient Response",
20
+
21
+ "task": "Given the information from above, your task is to choose one of four options that best answers the inquiry.",
22
+
23
+ "prompt": """\nMedical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. How confident are you to pick the correct option to the inquiry factually using the conversation log? In the first line of your response, generate the probability as a float from 0 to 1.\n\nIf there are missing features that prevent you from picking a confident and factual answer to the inquiry, consider which features are not yet asked about in the conversation log; then, consider which missing feature is the most important to ask the patient in order to provide the most helpful information toward a correct medical decision. Ask ONE SPECIFIC ATOMIC QUESTION to address this feature. The question should be bite-sized, and NOT ask for too much at once. In the second line of your response, generate the atomic question and nothing else.\n\nHowever, if you feel like you already have enough information from the above question-answer pairs to answer the patient inquiry, use the above information to produce a factual conclusion. In this case, answer with ONLY the correct letter choice and nothing else.""",
24
+
25
+ "yes_no": "Now, are you confident to pick the correct option to the inquiry factually using the conversation log? Answer with YES or NO and NOTHING ELSE.",
26
+
27
+
28
+ "implicit": "Given the information so far, if you are confident to pick an option correctly and factually, respond with the letter choice and NOTHING ELSE. Otherwise, if you are not confident to pick an option and need more information, ask ONE SPECIFIC ATOMIC QUESTION to the patient. The question should be bite-sized, NOT ask for too much at once, and NOT repeat what has already been asked. In this case, respond with the atomic question and NOTHING ELSE.",
+
+ "implicit_RG": "Given the information so far, if you are confident to pick an option correctly and factually, respond in the format:\nREASON: a one-sentence explanation of why you are choosing a particular option.\nANSWER: the letter choice and NOTHING ELSE. Otherwise, if you are not confident to pick an option and need more information, ask ONE SPECIFIC ATOMIC QUESTION to the patient. The question should be bite-sized, NOT ask for too much at once, and NOT repeat what has already been asked. In this case, respond in the format:\nREASON: a one-sentence explanation of why you should ask the particular question.\nQUESTION: the atomic question and NOTHING ELSE.",
+
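+ # Note (an assumption about downstream use, not stated in this file): replies
+ # to the "implicit"/"implicit_RG" prompts are presumably routed on their
+ # shape, e.g. a bare letter A-D is treated as a final answer and anything
+ # else as a follow-up question to the patient.
+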
+ "binary": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. Now, are you confident to pick the correct option to the inquiry factually using the conversation log? Answer with YES or NO and NOTHING ELSE.",
+
+ "binary_RG": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. Up to this point, are you confident to pick the correct option to the inquiry factually using the conversation log? Answer in the following format:\nREASON: a one-sentence explanation of why you are or are not confident and what other information is needed.\nDECISION: YES or NO.",
+
+ "numcutoff": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. What is your confidence score to pick the correct option to the inquiry factually using the conversation log? Answer with the probability as a float from 0.0 to 1.0 and NOTHING ELSE.",
+
+ "numcutoff_RG": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. What is your confidence score to pick the correct option to the inquiry factually using the conversation log? Answer strictly in the following format:\nREASON: a one-sentence explanation of why you are or are not confident and what other information is needed.\nSCORE: your confidence score written as a float from 0.0 to 1.0.",
+
+ "numerical": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. What is your confidence score to pick the correct option to the inquiry factually using the conversation log? Answer with the probability as a float from 0.0 to 1.0 and NOTHING ELSE.",
+
+ "numerical_RG": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. What is your confidence score to pick the correct option to the inquiry factually using the conversation log? Answer strictly in the following format:\nREASON: a one-sentence explanation of why you are or are not confident and what other information is needed.\nSCORE: your confidence score written as a float from 0.0 to 1.0.",
+
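+ # Note: "numerical"/"numerical_RG" repeat the "numcutoff" strings verbatim;
+ # presumably the two key families are selected by different abstention
+ # strategies upstream (an assumption; the caller is not shown in this file).
+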
+ "scale": """Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. How confident are you to pick the correct option to the problem factually using the conversation log? Choose between the following ratings:
45
+ "Very Confident" - The correct option is supported by all evidence, and there is enough evidence to eliminate the rest of the answers, so the option can be confirmed conclusively.
46
+ "Somewhat Confident" - I have reasonably enough information to tell that the correct option is more likely than other options, more information is helpful to make a conclusive decision.
47
+ "Neither Confident or Unconfident" - There are evident supporting the correct option, but further evidence is needed to be sure which one is the correct option.
48
+ "Somewhat Unconfident" - There are evidence supporting more than one options, therefore more questions are needed to further distinguish the options.
49
+ "Very Unconfident" - There are not enough evidence supporting any of the options, the likelihood of picking the correct option at this point is near random guessing.\n\nThink carefully step by step, respond with the chosen confidence rating ONLY and NOTHING ELSE.""",
+
+ "scale_RG": """Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. How confident are you to pick the correct option to the problem factually using the conversation log? Choose between the following ratings:
52
+ "Very Confident" - The correct option is supported by all evidence, and there is enough evidence to eliminate the rest of the answers, so the option can be confirmed conclusively.
53
+ "Somewhat Confident" - I have reasonably enough information to tell that the correct option is more likely than other options, more information is helpful to make a conclusive decision.
54
+ "Neither Confident or Unconfident" - There are evident supporting the correct option, but further evidence is needed to be sure which one is the correct option.
55
+ "Somewhat Unconfident" - There are evidence supporting more than one options, therefore more questions are needed to further distinguish the options.
56
+ "Very Unconfident" - There are not enough evidence supporting any of the options, the likelihood of picking the correct option at this point is near random guessing.\n\nAnswer in the following format:\nREASON: a one-sentence explanation of why you are or are not confident and what other information is needed.\nDECISION: chosen rating from the above list.""",
+
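+ # Note (an assumption about downstream use): the five verbal ratings above are
+ # presumably thresholded, e.g. "Very Confident"/"Somewhat Confident" trigger a
+ # final answer while the lower ratings trigger another question.
+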
+ "verbal_abstain_llama": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. Up to this point, are you confident to pick the correct option to the inquiry factually using the conversation log? Answer in the following format:\nDECISION: YES or NO.",
+
+ "implicit_abstain": "Medical conditions are complex, so you should seek to understand their situations across many features. First, consider which medical specialty is this patient's case; then, consider a list of necessary features a doctor would need to make the right medical judgment; finally, consider whether all necessary information is given in the conversation above. In the following cases, either answer the question or ask another information-seeking question:\n1. If you are confident to pick the correct option to the inquiry factually using the conversation log, answer with ONLY the correct letter choice and NOTHING ELSE.\n2. If you are not confident to pick the correct option to the inquiry factually using the conversation log, consider what are the missing information that would help you differenciate among the options. Ask ONE SPECIFIC ATOMIC QUESTION to address the missing feature. The question should be bite-sized, and NOT ask for too much at once. Make sure to NOT repeat any questions from the above conversation log. Generate the atomic question and NOTHING ELSE.",
+
+ "atomic_question": "If there are missing features that prevent you from picking a confident and factual answer to the inquiry, consider which features are not yet asked about in the conversation log; then, consider which missing feature is the most important to ask the patient in order to provide the most helpful information toward a correct medical decision. Ask ONE SPECIFIC ATOMIC QUESTION to address this feature. The question should be bite-sized, and NOT ask for too much at once. Generate the atomic question and NOTHING ELSE.",
+
+ "atomic_question_improved": "If there are missing features that prevent you from picking a confident and factual answer to the inquiry, consider which features are not yet asked about in the conversation log; then, consider which missing feature is the most important to ask the patient in order to provide the most helpful information toward a correct medical decision. You can ask about any relevant information about the patient’s case, such as family history, tests and exams results, treatments already done, etc. Consider what are the common questions asked in the specific subject relating to the patient’s known symptoms, and what the best and most intuitive doctor would ask. Ask ONE SPECIFIC ATOMIC QUESTION to address this feature. The question should be bite-sized, and NOT ask for too much at once. Make sure to NOT repeat any questions from the above conversation log. Answer in the following format:\nATOMIC QUESTION: the atomic question and NOTHING ELSE.\nATOMIC QUESTION: ",
+
+ "answer": "Assume that you already have enough information from the above question-answer pairs to answer the patient inquiry, use the above information to produce a factual conclusion. Respond with the correct letter choice (A, B, C, or D) and NOTHING ELSE.\nLETTER CHOICE: ",
+
+ "non_interactive": {
71
+ "starter": "A patient comes into the clinic presenting with a symptom as described in the statements below:",
72
+ "question_prompt": "Given the information from above, your task is to choose one of four options that best answers the following question: ",
73
+ "response": "To the best of your ability, answer with ONLY the correct letter choice and nothing else."
74
+ },
+
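+ # Note (an assumption, not stated in this file): "non_interactive" appears to
+ # be the single-turn baseline, where the full patient record is shown up front
+ # and no information-seeking questions are asked.
+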
+ "curr_template": """A patient comes into the clinic presenting with a symptom as described in the conversation log below:
+
+ PATIENT INFORMATION: {}
+ CONVERSATION LOG:
+ {}
+ QUESTION: {}
+ OPTIONS: {}
+ YOUR TASK: {}"""
+
+ }
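+
+ # Minimal usage sketch (an assumption about how the template is consumed; the
+ # calling code is not part of this file): the five {} slots in "curr_template"
+ # are positional, so a caller fills them in order. Assuming the dict above is
+ # bound to a name such as `expert_system` (hypothetical), this looks like:
+ #   prompt = expert_system["curr_template"].format(
+ #       patient_info, conversation_log, inquiry, options_text, task_text)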
+
+ patient_system = {
+ "system": "You are a truthful assistant that understands the patient's information, and you are trying to answer questions from a medical doctor about the patient. ",
+ "header": "Below is a list of factual statements about the patient:\n",
+ "prompt": 'Which of the above atomic factual statements answers the question? If no statement answers the question, simply say "The patient cannot answer this question, please do not ask this question again." Answer only what the question asks for. Do not provide any analysis, inference, or implications. Respond by selecting all statements that answer the question from above ONLY and NOTHING ELSE.',
+
+ "prompt_new": """Below is a list of factual statements about the patient:\n
93
+ {}\n
94
+ Which of the above atomic factual statements answers the question? If no statement answers the question, simply say "The patient cannot answer this question, please do not ask this question again." Answer only what the question asks for. Do not provide any analysis, inference, or implications. Respond with all statements that directly answer the question from above verbatim ONLY and NOTHING ELSE, with one statement on each line.
95
+
96
+ Example:
97
+ Question from the doctor: [some question]
98
+ STATEMENTS:\n[example statement: she reports that...]\n[example statement: she has a history of...]
99
+
100
+ Question from the doctor: {}
101
+ """,
+
+ "system_first_person": "You are a patient with a list of symptoms, and you task is to truthfully answer questions from a medical doctor. ",
104
+ "header_first_person": "Below is a list of atomic facts about you, use ONLY the information in this list and answer the doctor's question.",
105
+ "prompt_first_person": """Which of the above atomic factual statements are the best answer to the question? Select at most two statements. If no statement answers the question, simply say "The patient cannot answer this question, please do not ask this question again." Do not provide any analysis, inference, or implications. Respond by reciting the matching statements, then convert the selected statements into first person perspective as if you are the patient but keep the same information. Generate your answer in this format:
+
+ STATEMENTS:
+ FIRST PERSON: """
+ }
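+
+ # Minimal usage sketch (an assumption; the caller is not shown here): the two
+ # {} slots in "prompt_new" take the fact list and the doctor's question, e.g.
+ #   patient_prompt = patient_system["prompt_new"].format(facts_text, doctor_question)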
+
+ conformal_scores = {
+ "prompt_score": "Given the information from above, your task is to assign a likelihood score to each option. Respond with each probability as a float from 0 to 1 and NOTHING ELSE. Respond in the following format:\nA: 0.0\nB: 0.0\nC: 0.0\nD: 0.0",
+ }
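+
+ # Minimal sketch of parsing a reply to "prompt_score" (a hypothetical helper,
+ # not part of the original upload): the prompt requests one "LETTER: float"
+ # line per option, so a reply such as "A: 0.1\nB: 0.7\nC: 0.1\nD: 0.1" can be
+ # turned into a dict of option probabilities.
+ def parse_option_scores(reply):
+     scores = {}
+     for line in reply.strip().splitlines():
+         letter, _, value = line.partition(":")
+         letter, value = letter.strip().upper(), value.strip()
+         if letter in ("A", "B", "C", "D"):
+             try:
+                 scores[letter] = float(value)
+             except ValueError:
+                 pass  # tolerate malformed lines rather than crash
+     return scores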