This repository contains resources, code, and documentation for applying and comparing large language models (LLMs) with at most 13 billion parameters. The primary focus is fine-tuning these models for various natural language processing tasks and demonstrating their capabilities.
We have included detailed results and evaluation metrics for all our experiments in this section. You can compare your results with ours to gauge the performance of your fine-tuned models.
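As a starting point for such a comparison, the sketch below computes a simple exact-match accuracy between a model's predictions and reference answers. This is a minimal, hypothetical example; the data shown is placeholder content, not results from this repository, and your actual evaluation should use whichever metrics the experiments report.

```python
# Hypothetical sketch: scoring a fine-tuned model's outputs against
# reference answers with exact-match accuracy. All data below is a
# placeholder, not part of this repository's reported results.

def exact_match_accuracy(predictions, references):
    """Return the fraction of predictions that exactly match their reference."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

# Example usage with placeholder data:
preds = ["Paris", "4", "blue"]
refs = ["Paris", "5", "blue"]
print(f"exact match: {exact_match_accuracy(preds, refs):.2%}")  # → exact match: 66.67%
```

From here, you can compute the same metric on your own fine-tuned model's outputs and compare it against the numbers reported in this section.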