Test-Driven Design Pattern


Test-Driven Design (TDD) is a software development principle that encourages tests to be written before functional code. The practice ensures that code meets defined test requirements and passes all tests before being refactored or expanded. This document explores the design pattern that results from applying the principle. TDD also affects the order in which you prompt, which qualifies it for an associated prompting template (or pattern).

Let’s explore the topic using our random password generator that relies on a configuration file to render its results.

The random password generator reads a single-parameter configuration file that defines the length of the password it creates.

{
  "password_length": 10
}

Step 1

Using Golang, we will start by writing a unit test that ensures the configuration loads correctly and handles invalid cases (e.g., missing fields or malformed JSON).

package main

import (
	"testing"
)

// Config and loadConfig are defined alongside the implementation in
// Step 2; until then, these tests will not compile, which is the
// expected starting point in test-driven design.

// Test loading valid config file
func TestLoadConfigValid(t *testing.T) {
	config, err := loadConfig("config_valid.json")

	// Ensure no error occurred when loading a valid config
	if err != nil {
		t.Fatalf("Expected no error, got %v", err)
	}

	// Check if password length is correctly loaded
	if config.PasswordLength != 10 {
		t.Errorf("Expected password_length to be 10, got %d", config.PasswordLength)
	}
}

// Test loading invalid config file
func TestLoadConfigInvalid(t *testing.T) {
	_, err := loadConfig("config_invalid.json")

	// Ensure error is thrown for invalid config file
	if err == nil {
		t.Fatalf("Expected error for invalid config, got none")
	}
}

// Test loading config file with missing fields
func TestLoadConfigMissingFields(t *testing.T) {
	_, err := loadConfig("config_missing_fields.json")

	// Ensure error is thrown for missing fields
	if err == nil {
		t.Fatalf("Expected error for missing fields, got none")
	}
}
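The tests above reference three fixture files that are not shown. Here are plausible contents, consistent with the failing examples in Step 4 (the filenames come from the tests; the exact contents are assumptions):

```json
// config_valid.json
{
  "password_length": 10
}

// config_invalid.json (malformed JSON)
{
  "password_length": 10,

// config_missing_fields.json
{
  "other_field": 5
}
```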

Step 2

Implement the functionality to load the configuration file and make the test cases pass.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Config struct to map the config.json file
type Config struct {
	PasswordLength int `json:"password_length"` // This is the required field
}

// loadConfig reads the configuration file and returns a Config struct
func loadConfig(filename string) (*Config, error) {
	file, err := os.Open(filename)
	if err != nil {
		return nil, fmt.Errorf("could not open config file: %v", err)
	}
	defer file.Close()

	var config Config
	decoder := json.NewDecoder(file)
	err = decoder.Decode(&config)
	if err != nil {
		return nil, fmt.Errorf("error decoding config file: %v", err)
	}

	// Validate the loaded config. A missing field decodes to the zero
	// value, so a single check catches both missing and invalid values.
	if config.PasswordLength <= 0 {
		return nil, fmt.Errorf("missing or invalid 'password_length' in config: must be greater than 0, got %d", config.PasswordLength)
	}

	return &config, nil
}

func main() {
	config, err := loadConfig("config.json")
	if err != nil {
		fmt.Println("Error loading config:", err)
		return
	}

	fmt.Println("Config loaded successfully:", *config) // dereference to print values, not a pointer address
}
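The generator itself is not shown in this document; as a sketch of where the loaded configuration would be used, here is one possible implementation. The generatePassword name and the charset are assumptions, and crypto/rand is used so the output is unpredictable:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// charset defines the characters a password may contain (an assumption;
// adjust to your own policy, e.g. to add symbols).
const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// generatePassword draws each character uniformly from charset using
// the cryptographically secure crypto/rand source.
func generatePassword(length int) (string, error) {
	if length <= 0 {
		return "", fmt.Errorf("length must be greater than 0, got %d", length)
	}
	result := make([]byte, length)
	for i := range result {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			return "", fmt.Errorf("random source failed: %v", err)
		}
		result[i] = charset[n.Int64()]
	}
	return string(result), nil
}

func main() {
	pw, err := generatePassword(10)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println(pw)
}
```

In the finished program, main would pass config.PasswordLength into this function after loadConfig succeeds.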

Step 3

After refactoring, consider additional edge-case tests. At this point, you should consider the code ready for peer review.
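One edge case worth covering at this stage is an upper bound on the password length. A sketch, assuming validation is pulled into its own function; the validateConfig name and the 1024 cap are illustrative assumptions, not requirements:

```go
package main

import "fmt"

// Config matches the struct from Step 2.
type Config struct {
	PasswordLength int `json:"password_length"`
}

// maxPasswordLength is an assumed upper bound to reject absurd values.
const maxPasswordLength = 1024

// validateConfig centralizes all bounds checks in one place so new
// edge cases only need to be added here.
func validateConfig(c *Config) error {
	switch {
	case c.PasswordLength <= 0:
		return fmt.Errorf("'password_length' must be greater than 0, got %d", c.PasswordLength)
	case c.PasswordLength > maxPasswordLength:
		return fmt.Errorf("'password_length' must be at most %d, got %d", maxPasswordLength, c.PasswordLength)
	}
	return nil
}

func main() {
	// Exercise a few edge values, including the new upper bound.
	for _, n := range []int{-1, 0, 10, 5000} {
		err := validateConfig(&Config{PasswordLength: n})
		fmt.Printf("password_length=%d valid=%v\n", n, err == nil)
	}
}
```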

Step 4

Manually create, or have AI create, use cases that should fail and be caught by the implemented unit tests. Here are a few examples of configuration files that would fail these tests:

// Missing field
{
  "other_field": 5
}

// Malformed JSON
{
  "password_length": 10,
  
// Wrong data type
{
  "password_length": "ten"
}

Prompting Considerations

Greenfield Code

For new “greenfield” code, consider the following in your approach to working with an LLM.

If you prompt an LLM like ChatGPT or Code Llama to write a password generator that gets its password length from a configuration file in JSON format, it will build that program with little consideration for testing. As a developer, you need to shift your thinking to test-driven design through your prompting. A few considerations include:

  • Begin prompting with an overview of your functionality and ask for the first rendering of code to be test-driven, with comments left where functionality should be inserted.
  • Give the LLM a list of quality concerns you have and ask it to expand upon them.
  • Ask the LLM to generate a list of inputs or examples of failing test cases. In the example above, generate those failing config.json files.
  • Instruct the LLM to write the test-driven code.
  • Once satisfied with the test-driven approach, have it expand the code by implementing the core functionality and review it manually. This is a great opportunity for a peer review.
  • Refactor or revisit with the LLM any concerns you have, including code organization. Now is the time to consider breaking that configuration-loading function out into a /common/config.go (or the applicable pattern for the programming language of your choice).

Existing Code

Whether you are working with well-written code or that dreaded “monolith” you inherited, consider the following in your prompting as you work with existing code.

  • Start by having the LLM analyze the code and add comments.
  • Specifically ask the LLM to opine on the error trapping and recommend what would be required to improve upon it. Tell the LLM to use a test-driven mindset.
  • Manually review the code and the LLM’s output thus far.
  • Begin a two-pronged approach to improving the code:
  • Approach 1
    • Ask the LLM to refactor the code based on test-driven design principles
    • Review the output and consider compiling and testing
  • Approach 2
    • Instruct the LLM on how you want the code refactored (monolith busting)
    • Get the code working and set it aside
    • Define the core components of the new code and start fresh with a new test-driven design based on the “greenfield” prompting approach
    • Give the LLM specific portions of the refactored code you set aside to integrate into the test-driven design code
    • Get it to compile
    • Manually and peer review
    • Compile and test

The above is simply guidance, and your actual process may vary. Still, using a test-driven design approach to code prompting can greatly improve both greenfield and existing code.