TIMES OF TECH

Prompt Engineering for AI-Assisted Programming

So far in 2024, about 62% of developers are using AI tools, up from 44% in 2023, according to an extensive Stack Overflow report based on feedback from over 65,000 developers across 185 countries.

The adoption of AI-assisted programming has been swift, especially considering that many of the tools have only been around for a couple of years.  At the same time, they have delivered significant improvements in productivity and developer satisfaction.

The bottom line:  Software development is undergoing a revolution.

Yet these tools have issues.  The underlying LLMs (large language models) can hallucinate, resulting in buggy code.  The models are also pretrained, which means they are not up to date with the latest changes in languages, frameworks, and libraries.

Another problem is crafting effective prompts.  This can be more of an art than a science, which can certainly be confounding for developers.

However, there are some factors to keep in mind to get better code responses from AI-assisted programming tools.

Let’s take a look:

#1 – Setting the Persona or Role

Establishing a role or persona for an AI-assisted programming tool is like giving the AI a specific job title and set of skills. It’s important because it helps the AI understand how to approach your programming questions or tasks.

These are some sample prompts:

You are an experienced database administrator specializing in PostgreSQL. Offer advice on database design, optimization, and maintenance best practices.

You are a Rust language expert focusing on systems programming and performance-critical applications. Offer advice on leveraging Rust’s ownership system, trait-based generics, and zero-cost abstractions.

By setting the persona or role, you nudge the LLM to go beyond providing generic responses.
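In API-based workflows, the persona typically goes in the system message. Here is a minimal sketch of composing such a request in the message format used by many chat-completion APIs (the function name and the user question are illustrative, not from any particular tool):

```python
def build_messages(persona: str, task: str) -> list[dict]:
    """Pair a persona (sent as the system message) with the user's actual task."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

# Reusing the first sample persona from above:
messages = build_messages(
    "You are an experienced database administrator specializing in PostgreSQL. "
    "Offer advice on database design, optimization, and maintenance best practices.",
    "How should I index a table that is mostly queried by created_at ranges?",
)
```

Keeping the persona in the system message, separate from the task, lets you reuse the same persona across many requests.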

#2 – Providing Clear Instructions

When crafting prompts for AI code generation, aim for clarity and focus on a single, well-defined task.  This reduces ambiguity and lowers the risk of misinterpretation by the AI.  A focused approach not only leads to more accurate code generation but also simplifies debugging and modification, and it encourages the creation of modular, reusable functions, each with a distinct purpose.

Here’s a prompt:

Write a Python function that reads a CSV file named data.csv and returns the data from the second column as a list of numbers. Handle file-not-found errors and raise an appropriate exception. Follow PEP 8 style guidelines and include a docstring.

This should produce a useful function.  You can then build on this:

Write a Python function that takes a list of numbers as input and returns their average as a float. Handle the case of an empty list appropriately. Include error checking for invalid data types. Include a docstring.

Then we can have this:

Write a main Python function that uses the CSV reading function and the average calculation function to read data.csv, calculate the average of the second column, and return this average. Handle any exceptions raised by the other functions.

Each prompt targets a specific functionality, making the resulting code more modular and easier to test and maintain.
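To make the payoff concrete, here is one plausible set of functions an assistant might return for these three prompts (the names and error-handling details are illustrative, not any tool's actual output):

```python
import csv


def read_second_column(path: str = "data.csv") -> list[float]:
    """Read a CSV file and return its second column as a list of numbers.

    Raises FileNotFoundError with a clearer message if the file is missing.
    """
    try:
        with open(path, newline="") as f:
            return [float(row[1]) for row in csv.reader(f)]
    except FileNotFoundError:
        raise FileNotFoundError(f"CSV file not found: {path}")


def mean(numbers: list[float]) -> float:
    """Return the average of a list of numbers as a float."""
    if not numbers:
        raise ValueError("cannot average an empty list")
    if not all(isinstance(n, (int, float)) for n in numbers):
        raise TypeError("all items must be numbers")
    return sum(numbers) / len(numbers)


def main(path: str = "data.csv"):
    """Average the second column of the CSV, handling errors from the helpers."""
    try:
        return mean(read_second_column(path))
    except (FileNotFoundError, ValueError, TypeError) as exc:
        print(f"could not compute average: {exc}")
        return None
```

Because each prompt produced one small function, each piece can be tested in isolation and swapped out without touching the others.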

#3 – Use Context and Examples

Providing context and examples in your prompts can greatly improve the quality of the generated code. When possible, include sample input/output pairs, edge cases, or snippets of existing code that the AI-generated code needs to work with. This additional information helps the AI understand the specific requirements and constraints of your project.

Here’s a prompt:

Create a JavaScript function to validate email addresses. The function should return true for valid emails and false for invalid ones. Here are some test cases to consider:

test@example.com (valid)

invalid.email@com (invalid)

user@subdomain.example.co.uk (valid)

@missingusername.com (invalid)

Ensure the function handles these cases correctly.
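As a rough sketch of what a validator satisfying these test cases might look like (written in Python here for consistency with the earlier examples; the pattern is deliberately simple and nowhere near a full RFC 5322 check):

```python
import re

# Simple pattern: non-empty local part, "@", then a domain with at least one dot.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")


def is_valid_email(address: str) -> bool:
    """Return True if the address matches the simple email pattern."""
    return EMAIL_PATTERN.match(address) is not None


# The test cases from the prompt above:
assert is_valid_email("test@example.com")
assert not is_valid_email("invalid.email@com")
assert is_valid_email("user@subdomain.example.co.uk")
assert not is_valid_email("@missingusername.com")
```

Note how the test cases in the prompt translate directly into assertions, which makes it easy to verify whatever the AI actually generates.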

Providing context and examples in prompts enhances AI-generated code by clarifying requirements.  This approach also helps surface potential edge cases and ensures the generated solution is tailored to your specific needs.

Conclusion

Creating effective prompts for AI-assisted software development doesn’t need to be overly complex or involve numerous steps. Keeping just a few key principles in mind can significantly improve the quality of the code generated. These straightforward techniques help clarify requirements, reduce ambiguity, and guide the AI towards producing more accurate and useful results.

Also, at my upcoming workshop at ODSC West, I will show how to use open-source tools for AI-assisted programming.  Some of the highlights include:

  • Code completion, chat functions, and effective prompting techniques.
  • Using AI for regex, refactoring, debugging, and unit testing.
  • Applying models like Llama 3 for documentation, requirements, and README creation.
  • Balancing the benefits and limitations of AI in real-world development scenarios.
