Is the Larger the Better? An Exploratory Study into Human-Large Language Model Collaboration
| dc.contributor.author | Tao, Jie | |
| dc.contributor.author | Fang, Xing | |
| dc.contributor.author | Zhou, Lina | |
| dc.date.accessioned | 2024-12-26T21:04:59Z | |
| dc.date.available | 2024-12-26T21:04:59Z | |
| dc.date.issued | 2025-01-07 | |
| dc.description.abstract | Large language models (LLMs) have garnered considerable attention in both academia and industry. Given the array of LLMs available, one of the primary challenges lies in their selection and adaptation strategies. Although LLMs are generally large, they still vary significantly in size, and larger LLMs consume significantly more computing resources. This prompts the inquiry into whether larger models perform better. In addition, there is widespread recognition of the power of LLMs in performing open-ended or generative tasks. However, how to use LLMs to address closed-ended problems remains underexplored, and the exploration of human-LLM collaboration on closed-ended problems has been even sparser. This research aims to address the above limitations by comparing different types of state-of-the-art adaptation strategies for LLMs, including in-context learning and fine-tuning. Moreover, it employs multi-class multi-label classification - a closed-ended problem - to empirically evaluate those adaptation strategies. The research findings provide valuable insights and recommendations for human users considering deploying LLMs for closed-ended problems. | |
| dc.format.extent | 10 | |
| dc.identifier.doi | https://doi.org/10.24251/HICSS.2025.083 | |
| dc.identifier.isbn | 978-0-9981331-8-8 | |
| dc.identifier.other | d92bf6ca-9930-4c87-9dc8-8299fcbba5bf | |
| dc.identifier.uri | https://hdl.handle.net/10125/108921 | |
| dc.relation.ispartof | Proceedings of the 58th Hawaii International Conference on System Sciences | |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.subject | Technological Advancements in Digital Collaboration with Generative AI and Large Language Models | |
| dc.subject | adaptation strategies, aspect based sentiment analysis, generative ai, large language models | |
| dc.title | Is the Larger the Better? An Exploratory Study into Human-Large Language Model Collaboration | |
| dc.type | Conference Paper | |
| dc.type.dcmi | Text | |
| prism.startingpage | 694 |