Functional Tests

The next level up from unit tests is functional tests. Functional tests ensure that a set of code units works together to produce a desired outcome. They exercise a specific feature or function of the software, typically through its APIs and user interface, to verify that requirements are met.

Whenever possible, functional tests should be automated and incorporated into our continuous integration process. There is no single best framework for automating functional tests. The right choice depends on several factors, including the language under test, the preconditions or state needed to run a test, the results a test generates, and the method of verification.

There is no hard-and-fast rule as to how many functional test cases to write. Usually, functional test cases are derived from user requirements, stories, or external documentation. The challenge in testing FOSSology is to understand how the system is supposed to work and then design tests to prove or disprove that it works that way.

At a minimum, functional tests should cover:
  • all valid inputs in the supported ranges
  • invalid inputs, which should make the code error out or die gracefully
  • all APIs (if they exist)
  • all CLI options. For example, if a program has a CLI interface like

foo -h [-d dir] [-c comment] [-x] -f <file>

Then every option should be tested for valid values, and then every option should be tested for invalid values.
For example:

  • foo <no parameters> (expect help message)
  • foo -h (expect help message)
  • foo -d <invalid dir> (should produce an error)
  • foo -f <no file> (should produce an error)
  • foo -c <no comment> (should produce an error)
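Checks like these are easy to automate. Below is a hedged sketch in Python: since `foo` is only an illustrative interface, it is simulated here with an inline argparse stand-in so the example is self-contained and runnable. The point is the pattern of asserting on exit status for valid and invalid option values.

```python
import subprocess
import sys
import tempfile

# Stand-in for the hypothetical "foo" program with the interface
#   foo -h [-d dir] [-c comment] [-x] -f <file>
# A real functional test would invoke the actual binary instead.
FOO = r'''
import argparse, os, sys
p = argparse.ArgumentParser(prog="foo")
p.add_argument("-d", metavar="dir")
p.add_argument("-c", metavar="comment")
p.add_argument("-x", action="store_true")
p.add_argument("-f", metavar="file", required=True)
a = p.parse_args()
if a.d and not os.path.isdir(a.d):
    sys.exit("foo: invalid directory: " + a.d)
if not os.path.isfile(a.f):
    sys.exit("foo: no such file: " + a.f)
'''

def run_foo(*args):
    """Run the stand-in and return its exit status."""
    return subprocess.run([sys.executable, "-c", FOO, *args],
                          capture_output=True).returncode

with tempfile.NamedTemporaryFile() as tf:
    # valid inputs should succeed
    assert run_foo("-f", tf.name) == 0
    assert run_foo("-h") == 0                        # help message, exit 0
    # invalid inputs should error out gracefully (nonzero status, no crash)
    assert run_foo() != 0                            # no parameters
    assert run_foo("-d", "/no/such/dir", "-f", tf.name) != 0
    assert run_foo("-c") != 0                        # -c with no comment
print("all option checks passed")
```

Each case asserts only on the exit status; a fuller test would also check the error message written to stderr.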


Keep your test cases small, and have each one test only one thing (if possible). There should always be more than one test for any given module.
Examples of automated functional testing in FOSSology using PHPUnit, shell unit (shunit2), and a homegrown test harness are in our Subversion source tree. The shell unit examples and the homegrown test harness are discussed in more detail below.

An Example using shunit2

Tests utilizing PHPUnit and shunit2 demonstrate how to use the built-in assert functions to check for an expected test result. The basic structure looks like this:

  1. Setup - initialize variables, data, etc.
  2. Test - execute the functional test(s)
  3. Verify - verify results using the built-in asserts
  4. Teardown - perform cleanup and trigger the built-in reporting mechanisms
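The same four phases appear in any xUnit-style framework. As a minimal illustration (the test case name and file contents are hypothetical), here is that structure in Python's unittest, which PHPUnit and shunit2 both mirror:

```python
import os
import tempfile
import unittest

class LicenseScanTest(unittest.TestCase):  # hypothetical test case
    def setUp(self):
        # 1. Setup - initialize data before each test
        self.workfile = tempfile.NamedTemporaryFile(delete=False)
        self.workfile.write(b"GNU GENERAL PUBLIC LICENSE")
        self.workfile.close()

    def test_scan_finds_text(self):
        # 2. Test - execute the code under test (here, just re-reading the file)
        with open(self.workfile.name, "rb") as fh:
            data = fh.read()
        # 3. Verify - check results using the built-in asserts
        self.assertIn(b"PUBLIC LICENSE", data)

    def tearDown(self):
        # 4. Teardown - clean up after each test; the framework reports results
        os.unlink(self.workfile.name)

if __name__ == "__main__":
    unittest.main()
```

The framework supplies the setup/teardown scheduling, the asserts, and the reporting; you write only the test bodies.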

With shunit2, steps 1, 3, and 4 are provided by the framework. Here is a greatly simplified excerpt of shunit2 itself, interspersed with explanatory notes:

#! /bin/sh
# Copyright 2008 Kate Ward. All Rights Reserved.
# Released under the LGPL (GNU Lesser General Public License)
# shUnit2 -- Unit testing framework for Unix shell scripts.
# Author: (Kate Ward)

shunit2-specific initialization happens first:

# constants
# variables
__shunit_lineno=''  # line number of executed test
# counts of tests
# counts of asserts

Next come the asserts and helper functions:

# assert functions

# failure functions
# Records a test failure.

# suite functions
# Adds a function name to the list of tests schedule for execution.

# Stub. This function will be called before each test is run.
# Common environment preparation tasks shared by all tests can be defined here.
# This function should be overridden by the user in their unit test suite.

# Stub. This function will be called after each test is run.
# Common environment cleanup tasks shared by all tests can be defined here.
# This function should be overridden by the user in their unit test suite.

These next functions do all the grunt work for you.

# internal shUnit2 functions


Here's where the magic happens...

# main

# provide a public temporary directory for unit test scripts
mkdir "${SHUNIT_TMPDIR}" 

# some other housekeeping takes place here ...

# execute the oneTimeSetUp function (if it exists)

# dynamically build a list of tests to execute
if [ -z "${__shunit_suite}" ]; then
  shunit_funcs_=`_shunit_extractTestFunctions "${__shunit_script}"`
  for shunit_func_ in ${shunit_funcs_}; do
    suite_addTest ${shunit_func_}
  done
fi
# execute the tests

# ask the plugin to do any final reporting

# ask the plugin to finish itself

# execute the oneTimeTearDown function (if it exists)

# that's it folks
exit $?

Here's an example of step 2, the tests to execute:

#! /bin/sh
# test to see if the file exists
  if [ ! -f '../../../testing/dataFiles/TestData/licenses/Affero-v1.0' ]; then
    fail "ERROR: test file not found...aborting test"
  fi

  out=`/usr/local/etc/fossology/mods-enabled/nomos/agent/nomos ../../../testing/dataFiles/TestData/licenses/Affero-v1.0`
  assertEquals "File Affero-v1.0 contains license(s) Affero_v1" "${out}"

# test to see if the file exists
  if [ ! -f '../testdata/empty' ]; then
    fail "ERROR: test file not found...aborting test"
  fi

# echo "starting testOneShotempty"
  out=`../../agent/nomos ../testdata/empty`
  assertEquals "File empty contains license(s) No_license_found" "${out}"

# test to see if the file exists
  if [ ! -f '../../../testing/dataFiles/TestData/licenses/gpl-3.0.txt' ]; then
    fail "ERROR: test file not found...aborting test"
  fi

# echo "starting testOneShotgpl3"
  out=`../../agent/nomos ../../../testing/dataFiles/TestData/licenses/gpl-3.0.txt`
  assertEquals "File gpl-3.0.txt contains license(s) FSF,GPL_v3,Public-domain" "${out}"

An Example (written in Python) of a homegrown test harness

In the example above, the built-in asserts are used to verify the test. If the built-in features of a unit test harness (like PHPUnit or shunit2) do not meet your needs, you may want to look at Alex's example of a homegrown test harness. His Python module reads a simple XML file describing a set of functional tests and executes them using user-defined actions tailored to testing the scheduler. As an example, here is the XML that tests the scheduler's stop command using two user-defined actions, sequential and sleep:

  This starts a scheduler and waits for it to stop running the startup tests,
  it then sends a stop command. Since the scheduler doesn't have any currently
  running agents, this will cause the scheduler to stop running immediately.
  <testsuite name="JustStop">
    <definitions pwd       = "{$pwd}" 
                 config    = "{pwd}/scheduler/agent_tests/agents" 
                 log       = "{config}/fossology.log" 
                 agentdir  = "{pwd}/scheduler/agent" 
                 scheduler = "{agentdir}/fo_scheduler" 
                 cli       = "{agentdir}/fo_cli"/>
      <sequential command="{cli}" params="--config={config} -S" retval="0" 
                  result="scheduler:{pids:0} revision:{BUILD:SVN_REV} daemon:0 jobs:0 log:{log} port:{FOSSOLOGY:port} verbose:952"/>

    <!-- Stop the scheduler and make sure it isn't running anymore -->
    <test name="scheduler stop">
      <sequential command="{cli}" params="--config={config} -s" retval="0"/>
      <sleep duration="5"/>
      <sequential command="{cli}" params="--config={config} -S" retval="255"/>
    </test>
  </testsuite>
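The dispatch idea behind the harness can be sketched briefly: each element name inside a test is looked up as a method on an actions object. This is a hedged illustration of the mechanism as described above, not the harness's actual code, and the logged tuples are purely illustrative.

```python
import xml.dom.minidom as minidom

class Actions:
    """Stand-in for the harness's user-defined actions."""
    def __init__(self):
        self.log = []  # record what would be executed

    def sequential(self, node, doc, dest):
        # a real action would run the command synchronously
        self.log.append(("run", node.getAttribute("command"),
                         node.getAttribute("params")))

    def sleep(self, node, doc, dest):
        # a real action would pause for the given duration
        self.log.append(("sleep", node.getAttribute("duration")))

test_xml = '''<test name="scheduler stop">
  <sequential command="{cli}" params="--config={config} -s"/>
  <sleep duration="5"/>
</test>'''

doc = minidom.parseString(test_xml)
actions = Actions()
for node in doc.documentElement.childNodes:
    if node.nodeType == node.ELEMENT_NODE:
        # dispatch: element name -> action method of the same name
        getattr(actions, node.tagName)(node, doc, None)

print(actions.log)
# prints [('run', '{cli}', '--config={config} -s'), ('sleep', '5')]
```

This is why adding a new action only requires defining a method with the right name and signature.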

The current list of user-defined actions for testing the scheduler is:

  1. def concurrently(self, node, doc, dest): This executes a shell command concurrently with the testing harness.
  2. def sequential(self, node, doc, dest): This executes a shell command synchronously with the testing harness.
  3. def sleep(self, node, doc, dest): This action simply pauses execution of the test harness for duration seconds.
  4. def loadConf(self, node, doc, dest): This loads the configuration and VERSION data from the fossology.conf file and the VERSION file.
  5. def loop(self, node, doc, dest): This action repeatedly executes the actions contained within it.
  6. def upload(self, node, doc, dest): This action uploads a new file into the fossology test database so that an agent can work with it.
  7. def schedule(self, node, doc, dest): This action will schedule agents to run on a particular upload.
  8. def database(self, node, doc, dest): This action will execute an sql statement on the relevant database.
  9. def dbequal(self, node, doc, dest): Checks if a particular row and column in the results of a database call are an expected value.

Additional actions can be added as needed with CreateAction. To write a new type of action, write a function with the signature:
actionName(self, source_node, xml_document, destination_node)

  • The source_node is the XML node that describes the action; it should
    carry everything necessary for the action to be performed. It is passed
    to the action when the action is created.
  • The xml_document is the document that the test results are written to.
    It is passed to the action when it is called, not during creation.
  • The destination_node is the node in the results XML document that this
    particular action should write its results to. It is also passed in when
    the action is called, not during creation.
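For instance, a new action that records a success element into the results document might look like the following. This is a hedged sketch: the action name, the "duration" attribute, and the minidom usage are illustrative, not the harness's actual API.

```python
import xml.dom.minidom as minidom

class MyActions:
    # hypothetical action following the required signature
    def sleepReport(self, source_node, xml_document, destination_node):
        # read what the action needs from the node that describes it
        duration = int(source_node.getAttribute("duration"))
        # ... perform the action here (e.g. time.sleep(duration)) ...
        # write the action's result into the results document
        result = xml_document.createElement("success")
        result.setAttribute("slept", str(duration))
        destination_node.appendChild(result)

# usage sketch: parse an action description and a results skeleton
src = minidom.parseString('<sleepReport duration="5"/>').documentElement
results = minidom.parseString('<results/>')
MyActions().sleepReport(src, results, results.documentElement)
print(results.documentElement.toxml())
# prints <results><success slept="5"/></results>
```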

Additional Resources

shell unit (shunit2)

python code and xml file for the homegrown test harness