This is a high-level test strategy document outlining how we can ensure the flight computer software does what it is supposed to do, and doesn't do what it isn't supposed to do.
This document describes the scope, approach, and resources required for testing activities. It identifies what will be tested, the risks, assumptions, dependencies, and hardware & software requirements.
It's roughly organized along the lines of IEEE 829-1998 section 4 (see http://www.ruleworks.co.uk/testguide/IEEE-std-829-1998.htm), except that sections deemed unnecessary are left out.
The primary item to be tested is the flight computer software. That is the software described by the FCSoftwareRequirements page.
TODO When the following are available, they should be linked to from here:
- Design specification
- Users/Operations guide
- Installation guide
The following features will be tested (see the "Specific Requirements" section of FCSoftwareRequirements):
- Flight Sequencing
- ATV Overlay
- Logging to Flash
- WiFi Downlink
- Safety Requirements
TODO Add links to test design specification for each feature.
Requirement 6c ("The software should process 10 ping packets per second from the ground to Flight Computer") may be one we can go without testing, because we can safely launch and recover the rocket without implementing that feature. Similarly, 6f (about WiFi link quality) is less important. TBD
We want to verify that the software performs all the functions it is required to under normal inputs and also handles invalid inputs as gracefully as possible.
Functions listed in the FCSoftwareRequirements can be verified with functional testing:
- Units of code which are complex, long, or perform mission critical tasks should be unit tested.
- Interfaces between software components, and between the software and the avionics nodes, should be tested to ensure all components behave as described by the system design.
- Once implementation is complete, system tests should be performed. These will range from simple use cases such as "change the configuration" to more complex, complete tasks such as stepping through an entire launch sequence. Running through a launch sequence to verify the software performs required tasks like detecting apogee and deploying parachutes will require feeding the software simulated data from a physics engine that simulates flights. Both the sequence of commands coming from the simulated Launch Control and the simulated flight profiles should vary, to cover a variety of real-world scenarios and to more fully exercise the software's algorithms. It is important, though, that the exact inputs fed into the system be logged so they can be replayed if any bugs are found. We may want to log error frequency as well, so we have an idea of whether the software quality is improving over time.
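A minimal sketch of what such a system test harness might look like, in Python. The `ApogeeDetector` class is a hypothetical stand-in for the FC software's apogee-detection logic (the real implementation lives in the flight software), and the drag-free ballistic model is only a placeholder for the physics engine; the point is the shape of the loop: generate inputs, log every input exactly, feed them to the software under test, and keep the log so the run can be replayed bit-for-bit when a bug turns up.

```python
def simulated_altitudes(v0=300.0, dt=0.1, g=9.81):
    """Toy drag-free ballistic model: altitude samples until touchdown.
    Placeholder for the real flight-simulating physics engine."""
    t, samples = 0.0, []
    while True:
        alt = v0 * t - 0.5 * g * t * t
        if alt < 0 and t > 0:
            break
        samples.append(alt)
        t += dt
    return samples

class ApogeeDetector:
    """Hypothetical stand-in for the FC software's apogee detection."""
    def __init__(self):
        self.prev = None
        self.apogee_index = None

    def feed(self, index, altitude):
        # Declare apogee at the last sample before altitude starts decreasing.
        if self.prev is not None and altitude < self.prev and self.apogee_index is None:
            self.apogee_index = index - 1
        self.prev = altitude

def run_flight_test():
    inputs = simulated_altitudes()
    log = []                      # record exact inputs so the run can be replayed
    detector = ApogeeDetector()
    for i, alt in enumerate(inputs):
        log.append((i, alt))
        detector.feed(i, alt)
    return detector.apogee_index, log
```

Because the log captures the exact inputs, re-feeding it to a fresh detector must reproduce the same result, which is what makes failures debuggable.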
When designing tests, record each test's name and the requirement(s) it verifies in the test design specification list; this enables requirements tracing.
Functionality tests should be automated so they can be used as regression tests during development.
Requirements related to performance can be tested with stress testing. For example, we might increase the data rates from nodes to above the expected limits to ensure the software is not operating near critical levels at normal data rates. We could also increase data rates until the software fails, to find how much margin we have. We should measure CPU, memory, and disk usage during normal and above-normal activity levels to ensure they are not higher than we're comfortable with.
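The ramp-until-failure idea can be sketched as follows. Everything here is illustrative: `keeps_up` models the FC software as a fixed-capacity message processor (the real test would drive the actual software and watch its input queue), and the numbers are placeholders.

```python
def keeps_up(rate_hz, capacity_hz, duration_s=5):
    """Model of a fixed-capacity processor: does the input backlog stay bounded
    over the test duration? (Stand-in for driving the real software.)"""
    backlog = 0.0
    for _ in range(duration_s):
        backlog += rate_hz - capacity_hz   # net messages queued per second
        backlog = max(backlog, 0.0)
    return backlog == 0.0

def find_failure_rate(capacity_hz, normal_rate_hz, step_hz=10):
    """Ramp the input data rate upward until the software falls behind,
    and return the first failing rate."""
    rate = normal_rate_hz
    while keeps_up(rate, capacity_hz):
        rate += step_hz
    return rate
```

The margin is then the ratio of the failure rate to the normal operating rate; a margin near 1.0 would mean we are running uncomfortably close to the limit.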
White-box testing should be performed, picking boundary-value inputs for unit testing, or interesting inputs designed to test certain paths during system testing.
Tools such as valgrind should be used to verify that the software doesn't have memory leaks. Tools such as gcov should be used to verify the test cases test all code statements.
Static analysis: lint can be used to detect coding errors, and formal methods can be used to prove that the software matches critical parts of the specification.
To test the tests themselves, one idea is "fault seeding": deliberately introduce a defect into the code, and if the tests all still pass, you know something is wrong with them.
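A toy illustration of fault seeding, with hypothetical code (a `median` helper stands in for any unit under test). A healthy suite should pass against the real implementation and fail against the seeded one; if it passes both, the suite has a blind spot.

```python
def median(xs):
    """Unit under test (correct implementation)."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def median_seeded(xs):
    """Same function with a deliberately seeded defect: the even-length
    case no longer averages the two middle elements."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else s[n // 2]   # seeded fault

def suite_passes(fn):
    """Run the unit-test cases against a given implementation."""
    cases = [([3, 1, 2], 2), ([4, 1, 3, 2], 2.5)]
    return all(fn(xs) == expected for xs, expected in cases)
```

Here the suite includes an even-length case, so it catches the seeded fault; a suite with only odd-length cases would pass both versions and be flagged as inadequate.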
Informal code review can be achieved by using a mailing list or RSS feed for code commits, and formal code inspection can be done during weekly software meetings.
One example that would be expanded in the test design/case specifications:
The ATV overlay delay can be tested using something like a virtual ATV node. For instance, we could change the pressure in the virtual environment; the virtual pressure sensor would convert that to altitude and relay it to the FC software over USB. The FC software should then send a message to the ATV node for overlay within a short period (exact value TBD).
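A sketch of that virtual-node test, assuming a standard-atmosphere pressure-to-altitude conversion and simulated timestamps. The class and function names are hypothetical, `fc_delay_s` stands in for the real FC processing path, and `max_delay_s` is the still-TBD requirement.

```python
def pressure_to_altitude_m(pressure_pa, sea_level_pa=101325.0):
    """Standard-atmosphere barometric conversion (troposphere approximation)."""
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

class VirtualATVNode:
    """Virtual node that records overlay messages with their timestamps."""
    def __init__(self):
        self.overlay_messages = []   # (timestamp_s, altitude_m)

    def receive(self, t, altitude):
        self.overlay_messages.append((t, altitude))

def run_overlay_latency_test(fc_delay_s=0.05, max_delay_s=0.1):
    """Change the virtual pressure, relay through a stand-in FC, and check
    that the overlay message arrives within the (TBD) delay bound."""
    atv = VirtualATVNode()
    t_sensor = 0.0
    alt = pressure_to_altitude_m(90000.0)     # new virtual-environment pressure
    atv.receive(t_sensor + fc_delay_s, alt)   # FC forwards after its delay
    t_overlay, _ = atv.overlay_messages[0]
    return (t_overlay - t_sensor) <= max_delay_s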
TBD: what is target % code coverage, and how much code inspection do we want to do?
Most tests will either obviously pass or obviously fail. The exceptions need explicit pass criteria:
- Apogee Detect: It is a success if the software detects apogee within APOGEE_WINDOW seconds of the actual time (see glossary in FCSoftwareRequirements for value).
- The percent of data logged to flash: USB is host-initiated, and it is possible that a sensor could take new readings faster than the FC software asks for them. For now, the percent of these data points that must be logged for this test to pass is TBD.
- The percent of state changes sent over WiFi should be as close to 100% as possible, but it won't kill us if less than that is sent. TBD
- The delay between when data is available and when it shows up on ATV overlay (because we want the image and the on screen telemetry data to be in sync) is TBD.
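The quantified criteria above reduce to small pass/fail helpers once the TBD thresholds are fixed. A sketch, where `APOGEE_WINDOW_S` is only a placeholder (the real APOGEE_WINDOW is defined in the FCSoftwareRequirements glossary) and the logging threshold is likewise assumed:

```python
APOGEE_WINDOW_S = 2.0   # placeholder; real value is in the requirements glossary

def apogee_detect_passes(t_detected_s, t_actual_s, window_s=APOGEE_WINDOW_S):
    """Apogee Detect passes iff detection falls within the window of the
    actual apogee time."""
    return abs(t_detected_s - t_actual_s) <= window_s

def percent_logged(readings_taken, readings_logged):
    """Percent of sensor readings that made it into the flash log."""
    if readings_taken == 0:
        return 100.0
    return 100.0 * readings_logged / readings_taken
```

Note the boundary behavior is deliberate: a detection exactly at the window edge passes, which is exactly the kind of boundary-value input the white-box unit tests should exercise.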
Verification is considered complete, and the software deemed launch-ready when all tests pass. The software verification tests should be repeated any time the software changes after that, and both verification and validation should happen any time the requirements/specification changes.
The following deliverables will be produced by this plan:
- This test plan document
- List of test design specifications and what requirement(s) they verify.
- Test design specifications
- Test case specifications
- Test procedure specifications
- Test logs
- Test incident reports
- Test summary reports
- Test input data and output data from test runs.
- Test tools, e.g. unit and system test code, node emulator system, bug tracking system and test logging system (currently just the wiki, but we might want something more specialized if the number of tests or the amount of data to log gets large or unwieldy).
The remaining testing-related tasks are:
- Get avionics nodes and/or node specifications.
- Create software design.
- Create software system test design.
- Implement software.
- Implement unit tests.
- Implement software system tests.
We can test the software on our PCs, but we will also need to test it on the actual flight computer; its hardware is different enough that this is very important. For example, it has far fewer resources than a typical PC, so performance-related tests will produce different results on it.
We need a way for software running on a PC to emulate the avionics nodes. Or, a way of running the avionics node's firmware in a simulated environment on a PC where we can inspect or change the state of the "node".
With CAN, creating simulated node messages was simple using a Serial-to-CAN interface. USB is different: it is asymmetric, and a host acts much differently than a device, so we may need special software and/or hardware (cables, or the development board) to let the host communicate with, or control something that communicates with, the avionics USB devices.
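Whatever the transport ends up being, the emulated node's software shape follows from USB's asymmetry: the device never initiates a transfer, so the emulator is a request handler polled by the host, and the test harness manipulates its state directly. A sketch under those assumptions (class names, the `READ_PRESSURE` request, and the wire format are all invented for illustration):

```python
class EmulatedPressureNode:
    """Software stand-in for an avionics USB device. The host must poll it;
    the node never initiates a transfer."""
    def __init__(self):
        self.pressure_pa = 101325.0   # test harness can change this directly

    def handle_request(self, request):
        if request == b"READ_PRESSURE":
            return int(self.pressure_pa).to_bytes(4, "big")
        return b"ERR"                 # unknown request

def host_poll(node, hz=10, duration_s=1):
    """Model of the FC host polling the device at a fixed rate."""
    readings = []
    for _ in range(hz * duration_s):
        raw = node.handle_request(b"READ_PRESSURE")
        readings.append(int.from_bytes(raw, "big"))
    return readings
```

This inspect-and-change-the-"node" capability is what makes tests like the virtual ATV pressure change possible without real hardware.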
To test the WiFi we'll need a wireless connection between the flight computer and a PC. To test it in the real world setting, we should use the same antennas, signal amplifiers, and separation distances expected to be used on launch day.
Since this document doesn't cover testing the avionics nodes themselves, equipment like handheld GPS units, magnetometers, pressurization chambers, acceleration generators, etc. probably won't be necessary.
To test that the software actually works with the avionics nodes as delivered, we'll need to test the software running on the flight computer with the nodes plugged in. In that case, we may need some of the equipment (such as handheld GPS receivers) that we just said we probably won't need.
The software team is responsible for designing, implementing, and running the tests.
Scheduling is hard to predict or mandate since this is a volunteer-run project, but we do know the first launch of LV2c is planned for June 2007. That launch is planned to be just the airframe plus the recovery node, so flying with the flight computer and software will come sometime after that. Even if we could build a complete avionics package by June, we'd want to test the airframe first. So for now, here is an arbitrary, hypothetical, and of course completely idealistic schedule:
- April - May 2007: Avionics nodes built.
- April - July: Flight computer software design & implementation. Unit tests are developed concurrently with units, system test is designed concurrently with system design.
- Saturday, June 16th 2007: Airframe & Recovery node test launch.
- July - September: System test is implemented and run after software implementation is complete.
- First part of September: Final pre-launch verification.
- September 28-30: Airframe + Avionics + Software launch at BALLS 2007 in the Black Rock Desert.
- World Domination
Obviously, if the software fails, the rocket could fail. That could mean merely a wasted trip to the launch site because the flight computer software didn't work well enough to allow a launch, or, worse, the software could allow the launch but fail to detect apogee, meaning the rocket might crash (if the manual parachute deployment systems also fail). Therefore, if the software fails verification, we may have to postpone a launch.