Product development companies talk a lot about testing. But knowing what types of tests a software firm runs (and why they run them) can help a client better understand the product lifecycle and software development processes. Arcweb Technologies uses both manual and automated tests to ensure system stability and to uncover bugs. Automated tests are commonly used to aid development while manual tests are designed to identify issues and gaps in user experience. These tests are run before users are allowed into a system after a deployment to ensure that their perception of the system isn’t affected. Just like a structural engineer wouldn’t let someone drive a car across an untested bridge, a software engineer shouldn’t let a user into an untested system. And while no system is ever perfect, comprehensive (and frequent) testing can ensure that we are constantly pursuing perfection.
Here are some of the more common tests we implement and some background on why we run them…
Automated Tests: Unit Tests & Integration Tests
Automated tests are coded, typically in parallel with the feature they’re designed to evaluate. They have a higher up-front cost, but once written, they run substantially faster than a human ever could. So as an automated test suite grows, the time saved compared to a human doing the same work grows dramatically—and time is money!
Under the umbrella of automated tests, unit tests and integration tests are two of the most important. Unit tests exercise a small piece of code to make sure it’s doing the specific thing it should be doing with no surprises. Each unit test should test a very specific outcome. As such, it’s not unusual to have a bunch of unit tests for the same section of code, each testing a different scenario. Because each unit test covers a small component, unit tests are generally easier to troubleshoot when something breaks. Unit tests can also serve as an unofficial explanation of how the code should work (given what’s being tested and how). This is particularly helpful for code that isn’t used often. A final note on unit testing: unit tests typically do not test a product’s user interface in any way.
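To make that concrete, here’s a minimal sketch of what unit tests look like in practice. The `calculate_discount` function is hypothetical—it stands in for any small piece of code—and each test checks one specific outcome, including the error case:

```python
def calculate_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


# Each test verifies exactly one scenario, so a failure points
# straight at the behavior that broke.
def test_applies_discount():
    assert calculate_discount(100.0, 25) == 75.0

def test_zero_discount_returns_original_price():
    assert calculate_discount(80.0, 0) == 80.0

def test_rejects_invalid_percent():
    try:
        calculate_discount(100.0, 150)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

A test runner such as pytest would discover and run all three of these automatically; reading the test names alone already documents how the function is supposed to behave.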
Integration tests validate the interactions between different components of a system. They operate at a “higher level” than unit tests, often exercising an entire feature, so they’re usually not as granular. As a result, integration tests are likely to catch unexpected bugs, but they’re often less clear about exactly where those bugs live. When written correctly and looked at as a whole, these tests describe the behavior of a system and can be used to evaluate system UI.
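As a sketch of the difference, the integration test below exercises a hypothetical user-signup flow together with a real (in-memory) SQLite database, rather than testing either piece in isolation. The schema and `sign_up` function are illustrative assumptions:

```python
import sqlite3


def create_schema(conn):
    """Set up the (hypothetical) users table."""
    conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")

def sign_up(conn, email):
    """Insert a new user; reject duplicate email addresses."""
    try:
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    except sqlite3.IntegrityError:
        return False
    return True


def test_signup_rejects_duplicate_emails():
    # The test spans two components: the signup logic AND the
    # database constraint that enforces uniqueness.
    conn = sqlite3.connect(":memory:")
    create_schema(conn)
    assert sign_up(conn, "ada@example.com") is True
    assert sign_up(conn, "ada@example.com") is False
```

Notice that if this test fails, it tells you the signup feature is broken, but not whether the fault lies in the application code or the database layer—exactly the trade-off described above.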
Manual Tests: Smoke Test & Regression Tests
There are some things that just get missed in automated tests simply because you aren’t testing for them. Most often these are minor UI issues. (Think margin and padding issues, inaccurate color schemes, etc.) However, enough of those can really cheapen the feel of the product and ultimately impact success. Forcing yourself to view the system through the eyes of the user can not only highlight bugs, but also issues with your UX. These are things that can’t be addressed with automated testing, but can have a real impact on the success of the product.
The first manual test we’ll look at is the smoke test. A smoke test is a minimalistic set of tests with the goal of ensuring that the system isn’t broken in any significant way and that a deployment hasn’t unexpectedly broken anything. As new features are released, the smoke test should be updated and the new features subsequently tested. How long a smoke test takes really depends on the size of the application. Sometimes it’s hours, sometimes it’s minutes. For the products we build, smoke testing is typically built into the deployment process.
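When smoke testing is built into the deployment process, part of it can be scripted. Here’s a minimal sketch of an automated smoke check; `BASE_URL` and `CRITICAL_PATHS` are hypothetical placeholders, and the HTTP fetcher is injected so the check itself can be verified offline:

```python
import urllib.request

BASE_URL = "https://app.example.com"                    # hypothetical
CRITICAL_PATHS = ["/health", "/login", "/api/status"]   # hypothetical


def http_status(path):
    """Fetch BASE_URL + path and return the HTTP status code."""
    with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
        return resp.status

def smoke_check(paths, fetch):
    """Return a (path, reason) pair for every critical path that fails."""
    failures = []
    for path in paths:
        try:
            status = fetch(path)
            if status != 200:
                failures.append((path, f"HTTP {status}"))
        except OSError as exc:
            failures.append((path, str(exc)))
    return failures

# In a deploy pipeline, a non-empty result would block the release:
# if smoke_check(CRITICAL_PATHS, http_status): roll_back()
```

This only confirms the system “isn’t on fire”—the human walk-through of new features that the article describes still happens on top of it.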
Regression testing is an exhaustive, thorough test of the entire system. It is both long and tedious. Typically, it doesn’t make financial sense to run the full regression test for every build, especially if you’re deploying frequently. However, major features could warrant rerunning regression tests with each deploy. The ideal outcome is a product team that can confidently say there are no major problems.
Are there more tests we run? Of course—and we’ll cover them in subsequent posts. But for the time being, getting familiar with automated (unit and integration) and manual (smoke and regression) tests can really help a client understand our pursuit of products people love.