A simple RDBMS test environment with Docker
Introduction
When developing or maintaining the FDO code base, its unit test suite should be run and updated constantly to ensure any changes do not break existing behavior.
While the test suites are easy to run for the FDO core and for file-based FDO providers such as SDF, SHP, and SQLite, it is more difficult to test the GenericRdbms-based providers, as they require an actual installation of the RDBMS in question to run the provider-specific unit tests against.
With the advent of Docker and containerization, it is very simple to set up an RDBMS test environment for one or more GenericRdbms FDO providers without the overhead of having to physically install the required RDBMSes "bare metal" on the host development machine or needing to stand up an actual server with the RDBMSes installed.
A Docker-based environment is easy to set up and easy to tear down, which makes it ideal for spinning up an RDBMS test environment.
Requirements
To set up such an environment, you need an operating system that supports Docker. This can be:
- Windows 10 (with container support enabled)
- Any Linux distribution running kernel 3.10 or later
For a full list of platforms supported by Docker, see the Docker documentation.
It is recommended that you have 6GB of RAM available to be able to spin up all the RDBMSes together at once.
If you are on Windows 10, you must make sure the Docker daemon is running in Linux containers mode.
If your host OS is none of the above (e.g. you are on Windows 7/8/8.1), you can use a virtual machine that is running one of the above OSes.
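To confirm that Docker and docker-compose are installed and working before going any further, you can run the standard version and status commands from a terminal (the exact output will vary with your Docker version):

docker --version
docker-compose --version
docker info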
An example docker compose file
With the docker-compose tool that comes with a Docker installation, you can spin up all the required databases to run your RDBMS provider test suites against.
Here's an example docker-compose.yml file that defines an environment with:
- MySQL 5.5
- PostgreSQL 9.6 with PostGIS 2.4
- SQL Server 2017
version: "3" services: postgis: image: "mdillon/postgis:9.6" ports: - "5432:5432" environment: POSTGRES_PASSWORD: "changeme" mysql_55: image: "mysql:5.5" ports: - "3306:3306" environment: MYSQL_ROOT_PASSWORD: "fdotest" mssql: image: "microsoft/mssql-server-linux" ports: - "1433:1433" environment: SA_PASSWORD: "Sql2016!" ACCEPT_EULA: "Y"
In the directory where you have this file, you can then run the following command to spin up all 3 RDBMSes at once:
docker-compose up
On the first run, this will download the required docker images for all 3 RDBMSes and then spin them up as running containers.
Stopping the command (e.g. with CTRL-C) will stop all the running containers.
On subsequent runs, the images are not downloaded again, as they are already cached locally.
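If you do not want the containers tied to an open terminal, the standard docker-compose lifecycle commands also work here (nothing in this setup is specific to them):

docker-compose up -d    # start all containers in the background
docker-compose stop     # stop the containers without removing them
docker-compose down     # stop and remove the containers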
Some one-time PostgreSQL setup
The PostgreSQL provider test suite assumes that public is the only schema in any PostGIS database it creates. The chosen Docker image, while fully featured, breaks this assumption because it includes the tiger schema (for TIGER geocoding), which the provider does not use and the test suite does not exercise.
So before running the PostgreSQL test suite, use a tool like pgAdmin to connect to the PostGIS Docker container and drop any schema that is not public from the template_postgis and postgres databases.
This action only needs to be done once.
Alternatively, you could try a different PostGIS Docker image (or roll your own) that does not include these extraneous schemas in its PostGIS database setup.
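If you prefer the command line to pgAdmin, the same cleanup can be scripted with psql. The sketch below assumes the compose file above (host port 5432, user postgres, password changeme) and assumes the extra schemas created by the image are tiger, tiger_data and topology; replace <docker-host> with your Docker host's hostname or IP and adjust the schema list to whatever your databases actually contain:

PGPASSWORD=changeme psql -h <docker-host> -U postgres -d postgres -c "DROP SCHEMA IF EXISTS tiger, tiger_data, topology CASCADE;"
PGPASSWORD=changeme psql -h <docker-host> -U postgres -d template_postgis -c "DROP SCHEMA IF EXISTS tiger, tiger_data, topology CASCADE;"

The PGPASSWORD=... prefix is bash syntax; on Windows, set the variable with set PGPASSWORD=changeme before invoking psql.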
Running the GenericRdbms test suites against it
Firstly, build FDO, its providers and its unit tests as usual.
Then make a copy of the following files under <FDO_DIR>/Providers/GenericRdbms/Src/UnitTest:
- SqlServerSpatialInit.txt
- MySqlInit.txt
- PostGisInit.txt
- OdbcInit.txt
Windows only: where required, make sure libmysql.dll and libpq.dll are present in the directories where UnitTestMySQL.exe and UnitTestPostGIS.exe reside.
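For example, from <FDO_DIR>\Providers\GenericRdbms\Src\UnitTest you could copy the DLLs next to the Dbg64 test executables used by the script further below (the source paths here are only placeholders for wherever your MySQL and PostgreSQL client libraries actually live):

copy "C:\path\to\mysql\lib\libmysql.dll" Dbg64\
copy "C:\path\to\postgresql\bin\libpq.dll" Dbg64\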
Edit these copies so that the service points to the hostname or IP address of the docker host (if the docker host sits inside a virtual machine, then you want to point your configs to the hostname/IP of the virtual machine). Change any credentials if required to match what is defined in your docker-compose.yml file.
Then it's a case of leveraging the initfiletest parameter, which all GenericRdbms-based unit test executables support, to run the test suite against this modified configuration.
Here is a simple Windows batch script that can run the battery of applicable GenericRdbms test suites against this spun-up Docker environment, assuming:
- You built FDO for Debug|x64
- Your FDO source tree is in D:\fdo-trunk
- Your modified init text files reside in D:\fdo_test
- You want to capture test output for all tests to files in D:\fdo_results
cd /D D:\fdo-trunk\Providers\GenericRdbms\Src\UnitTest
Dbg64\UnitTestSQLServerSpatial.exe -NoWAIT initfiletest=D:\fdo_test\SqlServerSpatialInit.txt 2>&1 | tee D:\fdo_results\Dbg64_UnitTestSQLServerSpatial.txt
Dbg64\UnitTestMySQL.exe -NoWAIT initfiletest=D:\fdo_test\MySqlInit.txt 2>&1 | tee D:\fdo_results\Dbg64_UnitTestMySQL.txt
Dbg64\UnitTestPostGIS.exe -NoWAIT initfiletest=D:\fdo_test\PostGisInit.txt 2>&1 | tee D:\fdo_results\Dbg64_UnitTestPostGIS.txt
Dbg64\UnitTestODBC.exe OdbcSqlServerTests -NoWAIT initfiletest=D:\fdo_test\OdbcInit.txt 2>&1 | tee D:\fdo_results\Dbg64_UnitTestODBC_SqlServer.txt
Dbg64\UnitTestODBC.exe OdbcMySqlTests -NoWAIT initfiletest=D:\fdo_test\OdbcInit.txt 2>&1 | tee D:\fdo_results\Dbg64_UnitTestODBC_MySQL.txt
If you want to test against different versions of one or more RDBMSes, and a Docker image is available for your version of interest, you can edit your docker-compose.yml to replace an existing image or add extra ones (NOTE: adding extra Docker images will impose an additional burden on your system resources).
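For example, to test MySQL 5.7 alongside the existing 5.5 container, you could add another entry under the services: section of the compose file and map it to a different host port. This is only a sketch; the service name mysql_57 and host port 3307 are arbitrary choices, and your MySqlInit.txt copy would then need to point at that port:

  mysql_57:
    image: "mysql:5.7"
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "fdotest"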