This study investigates how to fine-tune the large language model GPT-SW3 for a specific use case. Using different training methods and techniques, the model was adapted and evaluated on its ability to generate correct answers. The work identifies the main challenges in the training process, which include data quality, preprocessing, and finding optimal parameter settings. The study also examines whether the model's ability to generate accurate answers depends on the size of the training data. The results show that longer training, a combination of supervised and unsupervised training, and careful parameter optimization are critical for improving the accuracy of the generated answers. Future work should focus on increasing the diversity of the datasets and on using a larger model to further improve answer accuracy.
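As a rough illustration of the kind of supervised fine-tuning setup the study describes, the sketch below fine-tunes a small GPT-SW3 checkpoint with the Hugging Face Trainer. The checkpoint name (AI-Sweden-Models/gpt-sw3-126m), the toy question-answer data, and all hyperparameters are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: supervised fine-tuning of a GPT-SW3 checkpoint with
# Hugging Face Transformers. Checkpoint, data, and hyperparameters are
# assumptions for illustration only; the study does not specify them.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "AI-Sweden-Models/gpt-sw3-126m"  # assumed small GPT-SW3 checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# GPT-style tokenizers may lack a pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Hypothetical question-answer pairs standing in for the study's training data.
examples = [
    {"text": "Fråga: Vad är GPT-SW3?\nSvar: En svensk storskalig språkmodell."},
    {"text": "Fråga: Vad används modellen till?\nSvar: Att generera svar på frågor."},
]
dataset = Dataset.from_list(examples)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Causal-LM collator: labels are derived from the input ids (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt-sw3-finetuned",
    num_train_epochs=3,                # longer training helped in the study
    learning_rate=5e-5,                # illustrative; tuning such settings was a key challenge
    per_device_train_batch_size=2,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

The same scaffolding covers the unsupervised part of the study's combined regime: replacing the question-answer pairs with raw domain text turns the loop into continued causal-language-model pretraining, since the collator builds labels directly from the input ids either way.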