
Stefanos Peros
Software engineer
September 24, 2024
Since the introduction of ChatGPT's API, we have embarked on a journey to incorporate Large Language Models (LLMs) into our products in innovative ways, aiming to enhance user experience and create value for our customers. Just as programming languages enable developers to command their computers, prompts enable users to guide LLMs in accomplishing specific tasks. However, the parallel doesn't stop there; just like poorly written code results in flawed programs, ineffective prompts can lead to inaccurate responses from AI models.
In our experience, one of the most common sources of errors has been parsing the model's response. This is why we shifted early on to third-party frameworks, particularly LangChain: it maps the LLM's response to dictionaries and objects, shielding the developer from the underlying prompt instructions that tell the model how to structure its output.
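The mechanism can be illustrated without any framework. In a minimal sketch (the names and format instructions below are illustrative, not LangChain's actual API), the prompt carries formatting instructions and a parser turns the raw text reply into a validated Python object:

```python
import json

# Illustrative format instructions that would be appended to the prompt,
# asking the model to reply in a machine-readable shape.
FORMAT_INSTRUCTIONS = (
    'Respond with a JSON object containing the keys '
    '"sentiment" (string) and "confidence" (number between 0 and 1).'
)

def parse_response(raw: str) -> dict:
    """Parse the model's raw text reply into a dict, checking expected keys."""
    data = json.loads(raw)
    missing = {"sentiment", "confidence"} - data.keys()
    if missing:
        raise ValueError(f"response is missing keys: {missing}")
    return data

# Simulated model reply (no API call is made in this sketch):
reply = '{"sentiment": "positive", "confidence": 0.92}'
result = parse_response(reply)
```

A framework like LangChain bundles both halves, generating the format instructions and doing the parsing, so a malformed reply surfaces as a clear error rather than a silent downstream bug.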
For many of our products, we use LangChain's built-in PromptTemplate class to construct our prompts. Generally, prompt templates contain placeholders, which are populated at runtime by the corresponding variables declared in the code.
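The template idea itself is simple enough to sketch in plain Python; LangChain's PromptTemplate works along these lines, though its exact API differs by version, and the class and variable names below are our own:

```python
class SimplePromptTemplate:
    """Illustrative stand-in for a prompt template with named placeholders."""

    def __init__(self, template: str, input_variables: list[str]):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs) -> str:
        # Fail loudly if a declared placeholder was not supplied at runtime.
        missing = set(self.input_variables) - kwargs.keys()
        if missing:
            raise KeyError(f"missing template variables: {missing}")
        return self.template.format(**kwargs)

summary_prompt = SimplePromptTemplate(
    template="Summarise the following text in {word_limit} words:\n\n{text}",
    input_variables=["word_limit", "text"],
)
prompt = summary_prompt.format(word_limit=50, text="Our quarterly report ...")
```

Declaring the input variables up front means a missing value raises an error at format time, instead of silently sending the model a prompt with an unfilled placeholder.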
Just as functions in code should be concise and well-structured, prompts should be specific and to the point; such prompts lead to more accurate and relevant AI responses.
GPTs are exactly what the name says: Generative Pre-trained Transformers, models pre-trained on an extensive amount of data. When using them for specialised use cases, that pre-training may be too broad for the domain at hand, which can lead to inaccurate responses.
Writing code typically presents a steeper learning curve than crafting prompts. However, many of the fundamental principles that underpin effective coding also apply to prompt creation, and they make all the difference in the usability of AI model responses.