Properties of LLMs: weak points and improvement measures for the domain adaptation of applications

Transferring the AI paradigm of “foundation models” to the language domain yields Large Language Models (LLMs), which communicate in natural language and, thanks to their broad training, can be applied to a wide variety of tasks. Using them productively, however, requires adapting the models to the specific application domain. In this second part of his blog series, Wilhelm Niehoff presents the three method areas used for this purpose: In-Context Learning (ICL), prompt engineering, and fine-tuning. When LLMs are queried and put to use, design-related weaknesses surface, such as hallucinations, outdated knowledge, and limited expertise on specialized topics. Beyond the three method areas, newer approaches such as DSPy and TextGrad aim to relieve users of constructing prompts by hand; accordingly, the weaknesses are addressed by adding further components that are coordinated by LLMs.
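To make the first of these method areas concrete: with in-context learning, domain knowledge is supplied as worked examples inside the prompt rather than through any change to the model weights. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name and the banking-support classification task are illustrative placeholders, not taken from the post.

```python
# A minimal sketch of in-context learning (few-shot prompting).
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY
# is set; the model name and the task below are placeholders.
from openai import OpenAI

client = OpenAI()

# Domain knowledge is supplied as examples *in the prompt*,
# not via weight updates -- the model "learns" in context.
few_shot_prompt = """Classify the customer message by department.

Message: "My card was charged twice for the same purchase."
Department: Disputes

Message: "How do I raise my daily transfer limit?"
Department: Online Banking

Message: "I want to refinance my mortgage."
Department:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0,
)
print(response.choices[0].message.content)
```

Prompt engineering then amounts to refining the instructions and examples in such a prompt, while fine-tuning, by contrast, bakes the domain knowledge into the model weights themselves.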

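The appeal of approaches such as DSPy is that the prompt becomes a compiled artifact rather than hand-crafted text: the user declares the task, and the framework constructs and can later optimize the underlying prompt. A minimal sketch, assuming the dspy package and a placeholder model name; the question is illustrative only.

```python
# A minimal DSPy sketch: declare *what* the task is, not the prompt text.
# Assumptions: dspy is installed and an OpenAI API key is configured;
# the model name and the question are placeholders.
import dspy

lm = dspy.LM("openai/gpt-4o-mini")  # placeholder model
dspy.configure(lm=lm)

# The signature "question -> answer" replaces a hand-written prompt;
# DSPy renders the actual prompt (and a reasoning step) behind the scenes.
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="Which method areas adapt LLMs to a domain?")
print(result.answer)
```

The design choice behind the declarative signature is that DSPy's optimizers can rewrite the generated prompt automatically against a metric, which is precisely the relief from manual prompt construction that the post describes.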