1. Background

Our team is short of testers, so we have no choice but to do the testing ourselves. Automating tests for the systems we develop saves manpower, improves efficiency, and increases our confidence in the quality of the system.

Our goal is to have automated tests covering three environments, as follows:

  • Automated testing in the CI/CD pipeline.
  • Post-release automated smoke/acceptance testing in various stage environments.
  • Post-release automated smoke/acceptance testing in production environments.

We may end up with a single unified use case library or separate libraries for different environments, but that is a secondary concern; what matters is the language we use to write these use cases and the tool we use to drive them.

Here’s a look at how the solution came about.

2. Solution

Initially, colleagues in the group described the test cases in YAML files and wrote a separate tool in Go to read and execute them. The tool worked, but the approach had several problems.

  • Writing complexity

    Even the simplest use case, a successful connect, takes almost 80 lines of YAML to configure; a slightly more complex test scenario runs to around 150 lines.

  • Difficult to extend

    The initial YAML structure was poorly designed and lacked extensibility, so extending a use case often meant creating a brand-new use case file.

  • Inadequate expression

    Our system is a messaging gateway, and some use cases depend on specific timing; use cases written in YAML with this structure cannot express such scenarios clearly.

  • Poor maintainability

    The execution logic lives inside the tool that drives the use cases, so you have to read the tool's code to understand how a case actually runs, and it is difficult for others to get up to speed on the tool quickly.

To address this, we wanted to design a new tool: test developers would write use cases in an external DSL grammar supported by the tool, and the tool would then read and execute those cases.

Note: In the classification used in Martin Fowler’s book Domain Specific Languages, general-purpose configuration formats (XML, JSON, YAML, TOML) and custom domain languages (e.g. regular expressions, awk, SQL) are both external DSLs, while using a fragment/subset of a general-purpose programming language as a DSL (as is often done with Ruby, for example) is called an internal DSL.

After a rough evaluation of the DSL grammar against the number of scenarios to be tested and the complexity of the use cases (I even generated a few draft versions of the grammar with the help of ChatGPT), I found that this “small language” would end up being “small but complete”: a use case written in such a DSL would probably be comparable in code size to one written in a general-purpose language (e.g. Python).

If that is the case, there is little point in designing our own external DSL; we might as well write the whole thing in Python. On second thought, though, since we would be using a subset of a general-purpose language anyway, and the team members are not familiar with Python, why not go back to Go?

So let’s make a bold move: use the Go testing framework as an “internal DSL” for writing use cases, and use the go test command as the test driver that executes them. Moreover, with GPT-4 it should not be a big problem to generate the TestXxx functions, additional use cases and so on.

Let’s take a look at how to organise and write use cases and use the go test driver for automated testing.

3. Implementation

3.1. Test case organization

Here is an example of an automated test of an open source mqtt broker.

Note: You can run a standalone instance of an open source mqtt broker locally as the system under test, for example Eclipse Mosquitto.

Before organising the use cases, I asked ChatGPT what aspects of an mqtt broker test should be included in the use cases, and ChatGPT gave me a simple table:

(screenshot of ChatGPT-4’s answer)

If you know anything about the MQTT protocol, then you should find the answers given by ChatGPT to be very good.

Here we will organise the use cases in three scenarios: connection, subscribe and publish:

$tree -F .
.
├── Makefile
├── go.mod
├── go.sum
├── scenarios/
│   ├── connection/              // connection
│   │   ├── connect_test.go      // test suites
│   │   └── scenario_test.go
│   ├── publish/                 // publish
│   │   ├── publish_test.go      // test suites
│   │   └── scenario_test.go
│   ├── scenarios.go             // Some of the public functions required for testing in scenarios
│   └── subscribe/               // subscribe
│       ├── scenario_test.go
│       └── subscribe_test.go    // test suites
└── test_report.html             // Default test reports generated

A brief description of the layout of this test case organisation:

  • We divide the test cases into multiple scenarios, here connection, subscribe and publish;
  • Since everything is driven by go test, each directory holding test sources follows Go's testing conventions, e.g. source files end with _test.go;
  • Each scenario directory holds the test case files, and a scenario can have multiple _test.go files. Each TestXxx function in a _test.go file acts as a test suite, and use cases are written as subtests of that TestXxx, so each subtest is a minimal test case;
  • The scenario_test.go in each scenario directory provides the TestMain entry for the package in that directory, mainly so that uniform command-line flags and parameter values can be passed to all packages, and so that per-scenario setup and teardown can be done in TestMain. Typical code for this file looks like this:
// github.com/bigwhite/experiments/automated-testing/scenarios/subscribe/scenario_test.go

package subscribe

import (
    "flag"
    "log"
    "os"
    "testing"

    mqtt "github.com/eclipse/paho.mqtt.golang"
)

var addr string

func init() {
    flag.StringVar(&addr, "addr", "", "the broker address(ip:port)")
}

func TestMain(m *testing.M) {
    flag.Parse()

    // setup for this scenario
    mqtt.ERROR = log.New(os.Stdout, "[ERROR] ", 0)

    // run this scenario test
    r := m.Run()

    // teardown for this scenario
    // tbd if teardown is needed

    os.Exit(r)
}

Let’s move on to the implementation of the specific test case.

3.2. Test case implementation

Let’s take a slightly more complex test case for the subscribe scenario and look at the test suites and cases in subscribe_test.go in the subscribe directory.

// github.com/bigwhite/experiments/automated-testing/scenarios/subscribe/subscribe_test.go

package subscribe

import (
    scenarios "bigwhite/autotester/scenarios"
    "testing"
)

func Test_Subscribe_S0001_SubscribeOK(t *testing.T) {
    t.Parallel() // indicate the suite can be run in parallel mode

    tests := []struct {
        name  string
        topic string
        qos   byte
    }{
        {
            name:  "Case_001: Subscribe with QoS 0",
            topic: "a/b/c",
            qos:   0,
        },
        {
            name:  "Case_002: Subscribe with QoS 1",
            topic: "a/b/c",
            qos:   1,
        },
        {
            name:  "Case_003: Subscribe with QoS 2",
            topic: "a/b/c",
            qos:   2,
        },
    }

    for _, tt := range tests {
        tt := tt
        t.Run(tt.name, func(t *testing.T) {
            t.Parallel() // indicate the case can be run in parallel mode
            client, testCaseTeardown, err := scenarios.TestCaseSetup(addr, nil)
            if err != nil {
                t.Errorf("want ok, got %v", err)
                return
            }
            defer testCaseTeardown()

            token := client.Subscribe(tt.topic, tt.qos, nil)
            token.Wait()

            // Check if subscription was successful
            if token.Error() != nil {
                t.Errorf("want ok, got %v", token.Error())
            }

            token = client.Unsubscribe(tt.topic)
            token.Wait()
            if token.Error() != nil {
                t.Errorf("want ok, got %v", token.Error())
            }
        })
    }
}

func Test_Subscribe_S0002_SubscribeFail(t *testing.T) {
}

The test cases in this test file are not that different from the unit tests we write every day! However, there are a few things to note.

  • Test function naming

Here, the two test suites are named Test_Subscribe_S0001_SubscribeOK and Test_Subscribe_S0002_SubscribeFail. The naming format is

Test_<scenario>_<suite number>_<abbreviated description of the test content>

For example, Test_Subscribe_S0001_SubscribeOK is suite S0001 of the subscribe scenario and covers successful subscription. This naming makes it easier to organise the test cases and to distinguish the different cases in the generated test report.

  • Test cases are expressed as subtests

    Each TestXxx is a test suite, while each table-driven subtest corresponds to a test case.

  • Both test suite and test case can be individually marked as parallelizable or not

    Calling the Parallel method of testing.T marks a particular TestXxx or test case (subtest) as able to run in parallel with others.

  • For each test case, we call setup and teardown

    This ensures that the test cases are independent of one another and do not affect each other; a sketch of what such a setup/teardown helper might look like follows this list.
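
Note: The scenarios.TestCaseSetup helper used by the test cases lives in scenarios/scenarios.go and is not shown in this article. Below is a minimal sketch of what such a helper might look like; it assumes the second parameter is an optional default message handler and uses the paho.mqtt.golang client. It is only an illustration, not the repository's actual code.

// scenarios/scenarios.go (hypothetical sketch)

package scenarios

import (
    "fmt"

    mqtt "github.com/eclipse/paho.mqtt.golang"
)

// TestCaseSetup connects a fresh client to the broker at addr and returns the
// client together with a teardown function that disconnects it. The handler
// parameter (may be nil) is assumed to be an optional default message handler.
func TestCaseSetup(addr string, handler mqtt.MessageHandler) (mqtt.Client, func(), error) {
    opts := mqtt.NewClientOptions().AddBroker("tcp://" + addr)
    if handler != nil {
        opts.SetDefaultPublishHandler(handler)
    }

    client := mqtt.NewClient(opts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        return nil, nil, fmt.Errorf("connect to broker %s failed: %w", addr, token.Error())
    }

    teardown := func() {
        client.Disconnect(250) // allow up to 250ms for in-flight messages to complete
    }
    return client, teardown, nil
}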

3.3. Test execution and report generation

After designing the layout and writing the use cases, the next step is to execute the use cases. So how do we execute these use cases?

As mentioned earlier, our solution is based on the go test driver, and our execution will also use go test.

Under the top-level directory automated-testing, execute the following command:

$go test ./... -addr localhost:30083

go test will iterate through the tests of each package under automated-testing, passing the -addr flag to each package's tests as it runs them. If no mqtt broker service is listening on localhost:30083, the above command outputs the following:

$go test ./... -addr localhost:30083
?       bigwhite/autotester/scenarios   [no test files]
[ERROR] [client]   dial tcp [::1]:30083: connect: connection refused
[ERROR] [client]   Failed to connect to a broker
--- FAIL: Test_Connection_S0001_ConnectOKWithoutAuth (0.00s)
    connect_test.go:20: want ok, got network Error : dial tcp [::1]:30083: connect: connection refused
FAIL
FAIL    bigwhite/autotester/scenarios/connection    0.015s
[ERROR] [client]   dial tcp [::1]:30083: connect: connection refused
[ERROR] [client]   Failed to connect to a broker
--- FAIL: Test_Publish_S0001_PublishOK (0.00s)
    publish_test.go:11: want ok, got network Error : dial tcp [::1]:30083: connect: connection refused
FAIL
FAIL    bigwhite/autotester/scenarios/publish   0.016s
[ERROR] [client]   dial tcp [::1]:30083: connect: connection refused
[ERROR] [client]   dial tcp [::1]:30083: connect: connection refused
[ERROR] [client]   Failed to connect to a broker
[ERROR] [client]   Failed to connect to a broker
[ERROR] [client]   dial tcp [::1]:30083: connect: connection refused
[ERROR] [client]   Failed to connect to a broker
--- FAIL: Test_Subscribe_S0001_SubscribeOK (0.00s)
    --- FAIL: Test_Subscribe_S0001_SubscribeOK/Case_002:_Subscribe_with_QoS_1 (0.00s)
        subscribe_test.go:39: want ok, got network Error : dial tcp [::1]:30083: connect: connection refused
    --- FAIL: Test_Subscribe_S0001_SubscribeOK/Case_003:_Subscribe_with_QoS_2 (0.00s)
        subscribe_test.go:39: want ok, got network Error : dial tcp [::1]:30083: connect: connection refused
    --- FAIL: Test_Subscribe_S0001_SubscribeOK/Case_001:_Subscribe_with_QoS_0 (0.00s)
        subscribe_test.go:39: want ok, got network Error : dial tcp [::1]:30083: connect: connection refused
FAIL
FAIL    bigwhite/autotester/scenarios/subscribe 0.016s
FAIL

This is what a test failure looks like.
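
Incidentally, because the suite and case names follow the naming convention from section 3.2, the standard -run flag of go test can be used to execute a single suite or even a single case. For example (a hypothetical invocation, using the subscribe scenario and the local broker address from above):

$go test ./scenarios/subscribe -addr localhost:30083 -run 'Test_Subscribe_S0001_SubscribeOK/Case_001'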

When automating tests, we usually save the error or success information to a test report file (mostly html), so how do we generate our test report file based on the above test result content?

Firstly, go test can present its output in structured form when given the -json flag, so in principle we could read the relevant fields from the JSON output and write them to HTML ourselves. Better still, there is a ready-made open source tool that does exactly this: go-test-report. Here is the command-line pipeline that lets go test and go-test-report work together to generate the test report:

Note: Installation of the go-test-report tool: go install github.com/vakenbolt/go-test-report@latest

$go test ./... -addr localhost:30083 -json|go-test-report
[go-test-report] finished in 1.375540542s

Once executed, a test_report.html file will be generated in the current directory, which can be opened with a browser to see the results of the test execution:

(screenshot: test_report.html)

With the test report, we can clearly see which cases passed and which failed, and the test suite and test case names let us quickly locate which case of which suite under which scenario is reporting the error. We can also click on the name of a test suite, e.g. Test_Connection_S0001_ConnectOKWithoutAuth, and open the error details to see the source file and the specific line number of the failure.

(screenshot: test_report.html error details)

To avoid typing the commands above by hand each time, we can put them into a Makefile, so that running make in the top-level directory executes the tests.

$make
go test ./... -addr localhost:30083 -json|go-test-report
[go-test-report] finished in 2.011443636s

To pass in the service address of a custom mqtt broker, use: $make broker_addr=192.168.10.10:10083.
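
The Makefile itself is in the sample repository; a minimal sketch consistent with the commands above might look like the following (the broker_addr default and the target name are assumptions):

# Makefile (sketch)
broker_addr ?= localhost:30083

# the recipe line must be indented with a tab
test:
	go test ./... -addr $(broker_addr) -json | go-test-report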

4. Summary

In this article, we have described how to implement automated testing driven by go test, covering the structural layout of such tests, how to write the use cases, and how to execute them and generate test reports.

One shortcoming of this solution is that Go and go-test-report must be installed in the environment where the test cases run.

go test supports compiling the tests of a package into a standalone executable (via the -c flag), but it cannot compile the tests of multiple packages into a single executable:

$go test -c ./...
cannot use -c flag with multiple packages
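
Compiling the tests of a single package does work, so one workaround is to build one test binary per scenario package and run it with the usual testing flags. For example (package path taken from the layout above, binary name as generated by go test -c):

$go test -c ./scenarios/connection
$./connection.test -test.v -addr localhost:30083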

In addition, a compiled test executable does not support converting its output to JSON format, so it cannot be hooked up to go-test-report to save the test results to a file for later viewing.

The source code covered in this article can be downloaded here.

5. Ref

  • https://tonybai.com/2023/03/30/automated-testing-driven-by-go-test/