CUE for cross-language test cases

Dozens of backends written in Java and Go support a unified configuration system that enables users to declaratively configure every aspect of their Yext account. Users can pull their configuration as a set of JSON files, and they can apply JSON files to make changes to their account.

CaC (“Configuration-as-Code”, our boring name for this system) has only two verbs (pull and apply), but there are a bunch of sophisticated behaviors that need to be implemented consistently across services, such as “dry run” and “comprehensive” modes, or various types of filtering.

Testing all of those behaviors was time-consuming – we found our test suites growing as large as the code under test! And it had to be done for each service – although the interface was the same, the resource type (data format, validation rules) varied between services. And even if we did find commonalities to factor out, we would still have to maintain them in both Java and Go. What to do?

The open-source data validation language CUE provided a way to concisely define our test cases and test data in a form that compiles down to a simple (if large) JSON file. Using a single suite of CUE test definitions, we only needed a small test driver in each of Java and Go to get immediate and full coverage of existing and future services! Now, any new service participating in CaC gets high baseline test coverage for free, and we have the confidence that common behaviors are implemented consistently across a huge surface area.

That was a whirlwind overview, so let’s take a deeper look at each part.


Configuration-as-Code (CaC) is a framework built at Yext, using RPC and Protocol Buffers, to configure the resources of numerous services written in multiple languages. The project's initial goal was to allow bits of configuration to be defined once and then reused across accounts, both by Yext employees and by customers themselves. For instance, a solution template for one account, which includes a set of custom entity types and custom fields, can be pulled and applied to other accounts using config-as-code. The project then evolved to handle arbitrary resources for various services.


The CaC framework consists of a configuration interface, which can be either a Command Line Interface (CLI) or a cloud IDE known as the Admin Console; a ConfigGateway; and the ConfigServices. The ConfigGateway is a web server that connects the interface with the ConfigServices: it receives requests from the interface, transforms them into the appropriate request per service, and sends each to the proper ConfigService. It then collects the responses from the ConfigServices and synthesizes them into a single reply back to the interface. Finally, the ConfigServices are a group of services that manage the resource configurations of different products.
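The fan-out described above can be sketched in Go. This is a hypothetical, minimal model, not Yext's actual gateway code: the `ConfigService` interface, `applyAll`, and the service names are all illustrative, and the real system speaks RPC with Protocol Buffers rather than plain function calls.

```go
package main

import "fmt"

// ConfigService stands in for one per-product backend.
type ConfigService interface {
	Name() string
	Apply(resources []string) string
}

type fakeService struct{ name string }

func (s fakeService) Name() string { return s.name }
func (s fakeService) Apply(rs []string) string {
	return fmt.Sprintf("%s: applied %d resource(s)", s.name, len(rs))
}

// applyAll transforms one incoming request into the appropriate
// request per service, then synthesizes the per-service responses
// into a single reply, as the ConfigGateway does for the CLI and
// Admin Console.
func applyAll(byService map[string][]string, services []ConfigService) []string {
	var reply []string
	for _, svc := range services {
		if rs, ok := byService[svc.Name()]; ok {
			reply = append(reply, svc.Apply(rs))
		}
	}
	return reply
}

func main() {
	req := map[string][]string{
		"entities": {"custom-field.json", "entity-type.json"},
		"pages":    {"site.json"},
	}
	services := []ConfigService{fakeService{"entities"}, fakeService{"pages"}}
	for _, line := range applyAll(req, services) {
		fmt.Println(line)
	}
}
```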


ConfigServices are the backends, written in Java and Go, that keep track of their resources' metadata while exposing endpoints to pull and apply those resources. These configuration services need to handle nontrivial functionality like namespaces, dry-run and comprehensive modes on apply, and filtering by ID, resource type, or namespace on pull.
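To make those behaviors concrete, here is a minimal in-memory sketch in Go. The `Service` type, its method signatures, and the exact dry-run and filter semantics are assumptions for illustration, not Yext's actual endpoints.

```go
package main

import "fmt"

// Resource is a simplified stand-in for a CaC resource.
type Resource struct {
	ID   string
	Type string
}

// Service is an illustrative in-memory ConfigService.
type Service struct {
	store map[string]Resource
}

// Apply stores each resource; with dryRun set, it only reports what
// would change and leaves the store untouched.
func (s *Service) Apply(rs []Resource, dryRun bool) []string {
	var results []string
	for _, r := range rs {
		if dryRun {
			results = append(results, "would apply "+r.ID)
			continue
		}
		s.store[r.ID] = r
		results = append(results, "applied "+r.ID)
	}
	return results
}

// Pull returns all resources, or only those matching a resource type.
func (s *Service) Pull(typeFilter string) []Resource {
	var out []Resource
	for _, r := range s.store {
		if typeFilter == "" || r.Type == typeFilter {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	svc := &Service{store: map[string]Resource{}}
	fmt.Println(svc.Apply([]Resource{{ID: "field1", Type: "field"}}, true))
	fmt.Println(len(svc.Pull(""))) // dry run stored nothing
	fmt.Println(svc.Apply([]Resource{{ID: "field1", Type: "field"}}, false))
	fmt.Println(len(svc.Pull("field")))
}
```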

Each product area has its own team that manages the product's configuration service. This scattered ownership of the ConfigServices meant duplicated work when testing complex functionality in these services. The services' nontrivial features, combined with the lack of individual ownership of the test process, made it hard to efficiently build complete test coverage for the ConfigServices.

These circumstances set the stage for the search for a unified test suite that would meet our testing principles. A test suite usable by all the configuration services had to meet the following requirements:

  • It had to enable us to define configuration resources in any language.
  • It needed to make adding new services easy, allowing full coverage of resource-agnostic functionality in the tested ConfigServices.
  • It had to enable the services to add further tests for product-specific scenarios.
  • Finally, the suite needed to integrate into the building and testing process of our Continuous Delivery pipeline.

With these requirements in mind, we considered several options for a unified testing suite. The idea that stood out was using the data validation language CUE to design a list of test cases containing the input and the expected output of the operations invoking the configuration services.


According to its website, CUE is an open-source language with a rich set of APIs and tooling to define, generate, and validate all kinds of data. Among its most common use cases are managing text-based configuration files, validating text-based or programmatic data, defining schemas to communicate an API, and converting CUE constraints to and from definitions in other languages. Here at Yext, we decided to put CUE's characteristics to a new purpose: developing a language-agnostic test suite. (If you want to learn more about the CUE language, here is a helpful introduction video.)

CUE test suite

The test suite consists of a set of independent test cases defined in CUE, where each test case is a list of sequential steps covering one shared service functionality, or a combination of them. The following snippet shows the definition of a test case:

// Defines a single test case as sequence of operations that performs
// 'apply/pull' resources from the config server.
// The cases are designed to be independent from each other. They don't share
// any initial setup and don't depend or modify the data from a different case.
#TestCase: {
    // The name of the test case.
    name: string

    // A more complete description of the test case.
    description: string

    // A sequence of steps that comprise the case. The steps are executed
    // in numerical order.
    sequence: [=~"\\$\\d+"]: #TestStep
}

Each step performs one apply and one pull operation, comparing the responses with the test suite's expected results. After determining each test case's structure, we created the CUE definition of each element involved, like the test steps and the apply and pull requests and responses, shown in the following snippet:

// A discrete step in a test case.
// Each step is processed by:
//  - Apply resources by performing an 'apply' request.
//  - Compare the result from the request with the expected response.
//  - Pull resources by performing a 'pull' request.
//  - Compare the result with the expected resources from the operation.
#TestStep: {
    // An optional description of the step.
    description?: string

    // The RPC request to apply resources to a business.
    applyRequest?: #Apply.#Request

    // The result of an RPC call to apply resources to a business.
    applyResponse?: #Apply.#Response

    // The RPC request to pull resources for a business.
    pullRequest?: #Pull.#Request

    // The result of an RPC to pull resources for a business.
    pullResponse?: #Pull.#Response
}

and the resources:

#ResourceType:   =~""
#ResourceTypeV1: =~""

// ResourceName identifies a CaC resource (the $id attribute).
#ResourceName: {
    namespace?: string
    baseName:   string
}

// A resource configuration JSON.
#Configuration: {
    $id:     string
    $schema: #ResourceTypeV1
}

// A resource protobuf.
#Resource: {
    #label:     string
    #projectId: string

    resourceName:  string
    schemaVersion: "1"
    type:          #ResourceType
    content:       json.Marshal(_content)

    _content: #Configuration
}

These definitions established the base for developing the tests. For each common functionality of the ConfigServices, the test steps are equivalent across services; each product team only needs to provide the resources of its own service.

While declaring the product's resources already generates a test covering every shared functionality, some configuration services need to add cases that are specific to their product. For this situation, CUE also provides the structure, through the definitions of the test case, steps, requests, and responses.

Test suite driver

Once we had the tests written in CUE, we needed to use them to test our ConfigServices and integrate them into the build-and-test stage of our Continuous Delivery pipeline. Let's first tackle how to run the tests defined in the CUE files against our services written in Java and Go. We used the 'cue export' command, which generates a JSON file from each CUE project. The output is our Test file, which includes the list of test cases, using the product's resources, for each service. We then created a model class and a driver class per language. The former parses the Test file and holds its data, including the list of test cases, the test steps, and each apply and pull request and response. The latter loops through the test cases, generating a new unit or integration test and invoking its set of operations. In our case, we needed implementations for Go and Java.

Once we had the driver, the next step was creating a test runner that would load the test model, initialize the ConfigService, and run the driver with both. The runner goes through each case, initializing a new instance of the service and resetting all resources before each test, ensuring independence between cases. Since we use Bazel as our build and testing platform, the final bit of creating the whole driver was integrating it with this tool. For that reason, it was essential to add rules to export CUE files when building the CUE project. We found a set of rules that fit our needs: rules_cue.

# ================================================================
# CUE support: rules_cue
# ================================================================

http_archive(
    name = "com_github_tnarg_rules_cue",
    url = "",
    strip_prefix = "rules_cue-540ca8c02f438f7ef3e53d64d4e4e859d578cc15",
    sha256 = "8ba5146b61ce07aac98124e60598d5f5e913b0756c618013da0a3f8d78cd29fa",
)

load("@com_github_tnarg_rules_cue//cue:deps.bzl", "cue_register_toolchains")

cue_register_toolchains()

load("@com_github_tnarg_rules_cue//:go.bzl", "go_modules")

go_modules()


We added these rules to our WORKSPACE, and we had a complete, integrated, language-neutral test suite that could test any Go or Java ConfigService.

cue_export(
    name = "tests",
    src = "tests.cue",
    visibility = ["//visibility:public"],
    deps = [
        ...
    ],
)

go_test(
    name = "go_default_test",
    srcs = ["server_test.go"],
    data = [":tests"],
    embed = [":go_default_library"],
    deps = [
        ...
    ],
)

It was exciting to find a solution that efficiently fit our needs while applying CUE to a new domain. We used CUE to define a schema for the resources and to reduce boilerplate in each test case. By leveraging CUE validation, each team defined the resources that the suite used in each test case. And by leveraging CUE's configuration features, we added new test cases without duplicating definitions, thanks to the test suite's predefined steps.

Furthermore, designing the test suite with CUE and a light driver enabled other teams to use the tests effortlessly, making it easy to add test variants like integration tests and unit tests against an in-memory DB. Finally, the most significant benefit of this language-neutral test suite is more robust services: we know we have tests covering every shared functionality.