LLMs4OL

**LLMs4OL Paradigm | Task A: Term Typing | Task B: Type Taxonomy Discovery | Task C: Type Non-Taxonomic Relation Extraction | Finetuning | Task A Detailed Results | Task B Detailed Results | Task C Detailed Results | Task A Datasets | Task B Datasets | Task C Datasets | Finetuning Datasets**

Task A. Term Typing

Zero-Shot Testing

Once the requirements are installed, you can run zero-shot testing with the following command:

python3 test.py [-h] --kb_name KB_NAME --model_name MODEL_NAME --template TEMPLATE --device DEVICE

Where KB_NAME, MODEL_NAME, TEMPLATE, and DEVICE accept the following values:

KB_NAME:

wn18rr, geonames, nci, snomedct_us, medcin

MODEL_NAME:

bert_large, flan_t5_large, flan_t5_xl, bart_large, bloom_1b7, bloom_3b, llama_7b, gpt3, chatgpt, gpt4

TEMPLATE: The templates available for each dataset are listed in this table.

template-1, template-2, template-3, template-4, template-5, template-6, template-7, template-8

DEVICE:

cpu, cuda
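
For orientation, the following is a minimal sketch of how an argument interface matching the usage line above could be defined with argparse. The flag names and value choices are taken from the lists above; the handler body is a hypothetical placeholder, not the actual logic of test.py.

```python
import argparse

def main():
    # Flags mirror the usage line above; choices mirror the value lists.
    parser = argparse.ArgumentParser(description="LLMs4OL Task A zero-shot testing (sketch)")
    parser.add_argument("--kb_name", required=True,
                        choices=["wn18rr", "geonames", "nci", "snomedct_us", "medcin"])
    parser.add_argument("--model_name", required=True,
                        choices=["bert_large", "flan_t5_large", "flan_t5_xl", "bart_large",
                                 "bloom_1b7", "bloom_3b", "llama_7b", "gpt3", "chatgpt", "gpt4"])
    parser.add_argument("--template", required=True,
                        choices=[f"template-{i}" for i in range(1, 9)])
    parser.add_argument("--device", default="cpu", choices=["cpu", "cuda"])
    args = parser.parse_args()

    # Placeholder: the real test.py loads the dataset, fills the prompt template,
    # queries the chosen model, and writes results to the results directory.
    print(f"Running {args.model_name} on {args.kb_name} with {args.template} ({args.device})")

if __name__ == "__main__":
    main()
```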

For example, to run the bert_large model on the wn18rr dataset with template-1 on a GPU, the command line would be:

python3 test.py --kb_name="wn18rr" --model_name="bert_large" --template="template-1" --device="cuda"

Alternatively, you can run the test_manual.sh script:

./test_manual.sh

It will ask for the dataset and model name, run the model on all 8 prompt templates, and save the results in the results directory. Since the number of runs is very large, we have also created test_auto.sh to run all possible combinations of datasets, templates, and models:

./test_auto.sh
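
If you prefer to drive the full sweep from Python rather than the shell script, a loop like the one below covers the same combinations. This is only a sketch, assuming test.py accepts the flags shown earlier; it is not the contents of test_auto.sh.

```python
import itertools
import subprocess

# Value sets taken from the lists above.
kbs = ["wn18rr", "geonames", "nci", "snomedct_us", "medcin"]
models = ["bert_large", "flan_t5_large", "flan_t5_xl", "bart_large",
          "bloom_1b7", "bloom_3b", "llama_7b", "gpt3", "chatgpt", "gpt4"]
templates = [f"template-{i}" for i in range(1, 9)]

for kb, model, template in itertools.product(kbs, models, templates):
    # Each run invokes test.py once; results are written to the results directory.
    subprocess.run(["python3", "test.py",
                    f"--kb_name={kb}", f"--model_name={model}",
                    f"--template={template}", "--device=cuda"],
                   check=True)
```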