Extend the IT framework to allow tests in extensions (apache#13877)
The "new" IT framework provides a convenient way to package and run integration tests (ITs), but only for core modules. We have a use case for running an IT against a contrib extension: the proposed gRPC query extension. This PR extends the IT framework to allow non-core ITs.
paul-rogers authored May 15, 2023
1 parent 10bce22 commit 3c0983c
Showing 19 changed files with 360 additions and 173 deletions.
59 changes: 26 additions & 33 deletions integration-tests-ex/cases/cluster.sh
@@ -27,15 +27,17 @@ set -e
# Enable for debugging
#set -x

export MODULE_DIR=$(cd $(dirname $0) && pwd)
export BASE_MODULE_DIR=$(cd $(dirname $0) && pwd)

# The location of the tests, which may be different than
# the location of this file.
export MODULE_DIR=${IT_MODULE_DIR:-$BASE_MODULE_DIR}

function usage {
cat <<EOF
Usage: $0 cmd [category]
-h, help
Display this message
prepare category
Generate the docker-compose.yaml file for the category for debugging.
up category
Start the cluster
down category
@@ -45,7 +47,7 @@ Usage: $0 cmd [category]
compose-cmd category
Pass the command to Docker compose. Cluster should already be up.
gen category
Generate docker-compose.yaml files (only.) Done automatically as
Generate docker-compose.yaml file (only.) Done automatically as
part of up. Use only for debugging.
EOF
}
@@ -60,7 +62,7 @@ CMD=$1
shift

function check_env_file {
export ENV_FILE=$MODULE_DIR/../image/target/env.sh
export ENV_FILE=$BASE_MODULE_DIR/../image/target/env.sh
if [ ! -f $ENV_FILE ]; then
echo "Please build the Docker test image before testing" 1>&2
exit 1
@@ -127,33 +129,33 @@ function show_status {
function build_shared_dir {
mkdir -p $SHARED_DIR
# Must start with an empty DB to keep MySQL happy
rm -rf $SHARED_DIR/db
sudo rm -rf $SHARED_DIR/db
mkdir -p $SHARED_DIR/logs
mkdir -p $SHARED_DIR/tasklogs
mkdir -p $SHARED_DIR/db
mkdir -p $SHARED_DIR/kafka
mkdir -p $SHARED_DIR/resources
cp $MODULE_DIR/assets/log4j2.xml $SHARED_DIR/resources
cp $BASE_MODULE_DIR/assets/log4j2.xml $SHARED_DIR/resources
# Permissions in some build setups are screwed up. See above. The user
# which runs Docker does not have permission to write into the /shared
# directory. Force ownership to allow writing.
chmod -R a+rwx $SHARED_DIR
sudo chmod -R a+rwx $SHARED_DIR
}

# Either generate the docker-compose file, or use "static" versions.
function docker_file {

# If a template exists, generate the docker-compose.yaml file. Copy over the Common
# folder.
TEMPLATE_DIR=$MODULE_DIR/templates
TEMPLATE_SCRIPT=${DRUID_INTEGRATION_TEST_GROUP}.py
if [ -f "$TEMPLATE_DIR/$TEMPLATE_SCRIPT" ]; then
# If a template exists, generate the docker-compose.yaml file.
# Copy over the Common folder.
TEMPLATE_SCRIPT=docker-compose.py
if [ -f "$CLUSTER_DIR/$TEMPLATE_SCRIPT" ]; then
export PYTHONPATH=$BASE_MODULE_DIR/cluster
export COMPOSE_DIR=$TARGET_DIR/cluster/$DRUID_INTEGRATION_TEST_GROUP
mkdir -p $COMPOSE_DIR
pushd $TEMPLATE_DIR > /dev/null
pushd $CLUSTER_DIR > /dev/null
python3 $TEMPLATE_SCRIPT
popd > /dev/null
cp -r $MODULE_DIR/cluster/Common $TARGET_DIR/cluster
cp -r $BASE_MODULE_DIR/cluster/Common $TARGET_DIR/cluster
else
# Else, use the existing non-template file in place.
if [ ! -d $CLUSTER_DIR ]; then
@@ -205,6 +207,13 @@ function verify_docker_file {
fi
}

function run_setup {
SETUP_SCRIPT="$CLUSTER_DIR/setup.sh"
if [ -f "$SETUP_SCRIPT" ]; then
source "$SETUP_SCRIPT"
fi
}

# Determine if docker-compose is available. If not, assume Docker supports
# the compose subcommand
set +e
@@ -219,17 +228,6 @@ set -e
# Print environment for debugging
#env

# Determine if docker-compose is available. If not, assume Docker supports
# the compose subcommand
set +e
if which docker-compose > /dev/null
then
DOCKER_COMPOSE='docker-compose'
else
DOCKER_COMPOSE='docker compose'
fi
set -e

case $CMD in
"-h" )
usage
@@ -238,24 +236,19 @@ case $CMD in
usage
$DOCKER_COMPOSE help
;;
"prepare" )
check_env_file
category $*
build_shared_dir
docker_file
;;
"gen" )
category $*
build_shared_dir
docker_file
echo "Generated file is in $COMPOSE_DIR"
echo "Generated file is $COMPOSE_DIR/docker-compose.yaml"
;;
"up" )
check_env_file
category $*
echo "Starting cluster $DRUID_INTEGRATION_TEST_GROUP"
build_shared_dir
docker_file
run_setup
cd $COMPOSE_DIR
$DOCKER_COMPOSE $DOCKER_ARGS up -d
# Enable the following for debugging
19 changes: 19 additions & 0 deletions integration-tests-ex/cases/cluster/AzureDeepStorage/verify.sh
@@ -0,0 +1,19 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------

require_env_var AZURE_ACCOUNT
require_env_var AZURE_KEY
require_env_var AZURE_CONTAINER
3 changes: 2 additions & 1 deletion integration-tests-ex/cases/cluster/Common/dependencies.yaml
@@ -71,6 +71,7 @@ services:
# platform: linux/x86_64
image: mysql:$MYSQL_IMAGE_VERSION
container_name: metadata
restart: always
command:
- --character-set-server=utf8mb4
networks:
@@ -79,7 +80,7 @@ services:
ports:
- 3306:3306
volumes:
- ${SHARED_DIR}/db:/var/lib/mysql
- ${SHARED_DIR}/db/init.sql:/docker-entrypoint-initdb.d/init.sql
environment:
MYSQL_ROOT_PASSWORD: driud
MYSQL_DATABASE: druid
23 changes: 23 additions & 0 deletions integration-tests-ex/cases/cluster/GcsDeepStorage/verify.sh
@@ -0,0 +1,23 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------

require_env_var GOOGLE_BUCKET
require_env_var GOOGLE_PREFIX
require_env_var GOOGLE_APPLICATION_CREDENTIALS
if [ ! -f "$GOOGLE_APPLICATION_CREDENTIALS" ]; then
echo "Required file GOOGLE_APPLICATION_CREDENTIALS=$GOOGLE_APPLICATION_CREDENTIALS is missing" 1>&2
exit 1
fi
21 changes: 21 additions & 0 deletions integration-tests-ex/cases/cluster/S3DeepStorage/verify.sh
@@ -0,0 +1,21 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------

require_env_var DRUID_CLOUD_BUCKET
require_env_var DRUID_CLOUD_PATH
require_env_var AWS_REGION
require_env_var AWS_ACCESS_KEY_ID
require_env_var AWS_SECRET_ACCESS_KEY
@@ -23,7 +23,7 @@
PyYaml does the grunt work of converting the data structure to the YAML file.
'''

import yaml, os, os.path
import yaml, os
from pathlib import Path

# Constants used frequently in the template.
@@ -49,15 +49,16 @@ def generate(template_path, template):
'''

# Compute the cluster (test category) name from the template path which
# we assume to be module/<something>/<template>/<something>.py
# we assume to be <module>/templates/<something>.py
template_path = Path(template_path)
cluster = template_path.stem
cluster = template_path.parent.name

# Move up to the module (that is, the cases folder) relative to the template file.
module_dir = Path(__file__).parent.parent
# Move up to the module relative to the template file.
module_dir = template_path.parent.parent.parent

# The target location for the output file is <module>/target/cluster/<cluster>/docker-compose.yaml
target_dir = module_dir.joinpath("target")
os.makedirs(target_dir, exist_ok=True)
target_file = target_dir.joinpath('cluster', cluster, 'docker-compose.yaml')

# Defer back to the template class to create the output into the docker-compose.yaml file.
@@ -205,7 +206,7 @@ def add_env(self, service, var, value):
def add_property(self, service, prop, value):
'''
Sets a property for a service. The property is of the same form as the
.properties file: druid.some.property.
runtime.properties file: druid.some.property.
This method converts the property to the env var form so you don't have to.
'''
var = prop.replace('.', '_')
@@ -230,7 +231,7 @@ def add_port(self, service, local, container):
Add a port mapping to the service
'''
ports = service.setdefault('ports', [])
ports.append(local + ':' + container)
ports.append(str(local) + ':' + str(container))

def define_external_service(self, name) -> dict:
'''
30 changes: 29 additions & 1 deletion integration-tests-ex/docs/compose.md
@@ -37,10 +37,38 @@ See also:

## File Structure

Docker Compose files live in the `druid-it-cases` module (`test-cases` folder)
Docker Compose files live in the `druid-it-cases` module (`cases` folder)
in the `cluster` directory. There is a separate subdirectory for each cluster type
(subset of test categories), plus a `Common` folder for shared files.

### Cluster Directory

Each test category uses an associated cluster. In some cases, multiple tests use
the same cluster definition. Each cluster is defined by a directory in
`$MODULE/cluster/$CLUSTER_NAME`. The directory contains a variety of files, most
of which are optional:

* `docker-compose.yaml` - Docker Compose file, if created explicitly.
* `docker-compose.py` - Docker Compose "template," if generated. The Python template
format is preferred. (One of the `docker-compose.*` files is required.)
* `verify.sh` - Verify the environment for the cluster. Cloud tests require that a
number of environment variables be set to pass keys and other setup to tests.
(Optional)
* `setup.sh` - Additional cluster setup, such as populating the "shared" directory
with test-specific items. (Optional)

The `verify.sh` and `setup.sh` scripts are sourced into one of the "master"
scripts and can thus make use of environment variables already set:

* `BASE_MODULE_DIR` points to `integration-tests-ex/cases` where the "base" set
of scripts and cluster definitions reside.
* `MODULE_DIR` points to the Maven module folder that contains the test.
* `CATEGORY` gives the name of the test category.
* `DRUID_INTEGRATION_TEST_GROUP` is the cluster name. Often the same as `CATEGORY`,
but not always.

The `set -e` option is in effect so that any error fails the test.
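
As a concrete illustration, a minimal `setup.sh` might look like the sketch below. The file name `my-test.properties` is a hypothetical placeholder, and `SHARED_DIR` and `CLUSTER_DIR` are assumed to be exported by `cluster.sh` before the script is sourced; the defaults here exist only so the sketch runs standalone.

```shell
# Hypothetical setup.sh sketch: populate the shared directory with
# test-specific items before the cluster starts. Because `set -e` is
# in effect when sourced, any failing command aborts the run.
# The defaults below are only for standalone experimentation.
SHARED_DIR=${SHARED_DIR:-/tmp/druid-it-shared}
CLUSTER_DIR=${CLUSTER_DIR:-.}

mkdir -p "$SHARED_DIR/resources"
# Copy a test-specific file if this cluster provides one
# (my-test.properties is a hypothetical example name).
if [ -f "$CLUSTER_DIR/my-test.properties" ]; then
  cp "$CLUSTER_DIR/my-test.properties" "$SHARED_DIR/resources/"
fi
```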

## Shared Directory

Each test has a "shared" directory that is mounted into each container to hold things
31 changes: 31 additions & 0 deletions integration-tests-ex/docs/docker.md
@@ -211,6 +211,37 @@ when it starts. If you start, then restart the MySQL container, you *must*
remove the `db` directory before restart or MySQL will fail due to existing
files.

### Per-test Extensions

The image build includes a standard set of extensions. Tests of contrib or custom
extensions may need additional extensions. This is most easily done not by altering the
image, but by adding the extensions at cluster startup. If the shared directory has
an `extensions` subdirectory, then that directory is added to the extension search
path on container startup. To add an extension `my-extension`, your shared directory
should look like this:

```text
shared
+- ...
+- extensions
+- my-extension
+- my-extension-<version>.jar
+- ...
```

The `extensions` directory should be created within the per-cluster `setup.sh` script,
which runs when starting your test cluster.
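
A hedged sketch of the corresponding `setup.sh` fragment follows. The extension name `my-extension` and the jar path are placeholders, and `SHARED_DIR` and `MODULE_DIR` are assumed to be set by `cluster.sh`; the defaults below are only for running the sketch standalone.

```shell
# Hypothetical fragment: stage an extension jar under the shared
# directory so containers add it to the extension search path on
# startup. Defaults are only for standalone experimentation.
SHARED_DIR=${SHARED_DIR:-/tmp/druid-it-shared}
MODULE_DIR=${MODULE_DIR:-.}

EXT_DIR="$SHARED_DIR/extensions/my-extension"
mkdir -p "$EXT_DIR"
# Copy the extension jar built by this module, if present
# (the target/ path is a placeholder for your build layout).
for jar in "$MODULE_DIR"/target/my-extension-*.jar; do
  if [ -f "$jar" ]; then
    cp "$jar" "$EXT_DIR/"
  fi
done
```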

Be sure to also include the extension in the load list in your `docker-compose.py` template.
To load the extension on all nodes:

```python
def extend_druid_service(self, service):
self.add_env(service, 'druid_test_loadList', 'my-extension')
```

Note that the above requires Druid and IT features added in early March, 2023.

### Third-Party Logs

The three third-party containers are configured to log to the `/shared`