Quality assurance automation engineers test applications developed in-house, from legacy monoliths to cloud-native applications that leverage microservices. A typical mission-critical application requires a combination of unit testing at the code level, code review, API tests, automated user experience testing, security testing, and performance testing. The best devops practice is to automate running these tests and then select an optimal subset for continuous testing inside CI/CD (continuous integration and continuous delivery) pipelines.
But what about applications, workflows, integrations, data visualizations, and user experiences configured using SaaS (software-as-a-service) platforms, low-code development tools, or no-code platforms that empower citizen developers? Just because there’s less or no coding involved, does that automatically imply that workflows function as required, data processing meets the business requirements, security configurations align with company policies, and performance meets user expectations?
This question reminds me of something my high school calculus teacher taught us. She would say, “If you assume, then you make an ass out of you and me.” In the cases of SaaS, low-code, and no-code, assuming that the app functions as required without a testing plan can lead to many issues:
- Annoyed stakeholders frustrated with unexpected outcomes
- Security holes that expose data to the public or to employees who shouldn’t have access
- Data problems that may propagate to other integrated workflows and customer experiences
- Performance issues when the application scales to many users and larger data sets
- Frustrated IT teams called in to rework applications or develop work-arounds
So, what should be tested? How can these apps be tested without access to the underlying source code? Where should IT prioritize testing, especially considering many devops organizations are understaffed in QA engineers?
I spoke to several experts to help me sort out some answers.
Start by defining and implementing agile acceptance testing
John Kodumal, CTO and cofounder of LaunchDarkly, reminds those of us in IT that acceptance testing should apply to all applications supported by IT, not just the ones that require software development. He says, “In a traditional SaaS model, the development team performs acceptance testing as part of the normal release testing procedure.”
Defining business and user acceptance tests is an important place to start because most SaaS, low-code, and no-code applications require configuration, and the implementation can follow scrum or another agile methodology. Agile disciplines include writing requirements as user stories with documented pass or fail acceptance criteria. An agile team should treat each user story as a small functional contract covering the business and nonfunctional requirements.
Defining acceptance criteria is an important first step and should be followed even for SaaS applications that require no coding or limited configuration.
But suppose IT doesn’t take on the responsibility of defining acceptance criteria and automating these tests. Then either the lack of testing creates risks, or business teams take on testing themselves. Neither option is optimal. IT should be the department responsible for leading the implementation, including the testing functions.
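None of this requires access to the platform's source code. As a minimal sketch, the pass/fail acceptance criteria from a user story can be turned into automated checks that exercise the configured application from the outside. The `call_workflow` function and the discount rule below are hypothetical stand-ins for a real low-code app's endpoint, not any particular platform's API:

```python
# Hypothetical acceptance tests for a discount workflow configured in a
# low-code platform. call_workflow() stands in for an HTTP call to the
# deployed application's endpoint.

def call_workflow(order_total: float, customer_tier: str) -> float:
    """Stand-in for invoking the configured pricing workflow."""
    # Assumed business rule: gold customers get 10% off orders over $100.
    if customer_tier == "gold" and order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

# Acceptance criteria from the user story, expressed as pass/fail checks.
def test_gold_discount_applies():
    assert call_workflow(200.0, "gold") == 180.0

def test_no_discount_under_threshold():
    assert call_workflow(80.0, "gold") == 80.0

def test_standard_tier_pays_full_price():
    assert call_workflow(200.0, "standard") == 200.0

if __name__ == "__main__":
    test_gold_discount_applies()
    test_no_discount_under_threshold()
    test_standard_tier_pays_full_price()
    print("all acceptance criteria passed")
```

In practice the stub would be replaced with a request to the application's API or a browser-driven interaction, but the story's acceptance criteria map to the assertions the same way.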
Low-code and no-code require testing the business logic
Low-code and no-code platforms provide an abstraction layer and simplify developing, supporting, and enhancing applications. But when using these platforms, you are still coding business logic, configuring a workflow, defining data processing rules, and choosing access roles. The platform handles the simplification, but there’s still the risk that the developer implements the logic incorrectly or doesn’t know how to fulfill the requirements accurately in the low-code or no-code platform.
Kodumal notes that this introduces two testing responsibilities. “Testing a low-code solution focuses on testing two different things: testing the business logic that the low-code user is expressing and testing that the structure supporting the low-code solution is working properly. These two types of tests ensure that the application is working the way end-users expect it to work.”
You can test the business logic with tools that capture user interactions through the browser and automate testing these flows. Testing the underlying structure may require reviews of the data models, permissions, forms, reports, and automations to ensure they meet standards and don’t introduce risks.
Andrew Clark, CTO of Monitaur, suggests that automation testing should focus on the workflow and how the application supports a business process. He says, “A good way to test SaaS and low-code applications is to perform basic input and output validation. You will need to create a matrix of key events/actions we expect the system to perform and set up test cases to validate that the system is performing as expected.”
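The input/output matrix Clark describes can be sketched as a table of events paired with the result the system is expected to produce. Everything below is illustrative: `process_event` is a hypothetical stand-in for the configured workflow under test, and the routing rules are assumed:

```python
# Sketch of an input/output validation matrix for a configured workflow.
# process_event() is a hypothetical stand-in for the system under test.

def process_event(event: dict) -> str:
    # Assumed approval rules configured in the SaaS or low-code platform.
    if event["amount"] > 10_000:
        return "needs_approval"
    if not event.get("po_number"):
        return "rejected"
    return "auto_approved"

# Matrix of key events and the output we expect the system to produce.
TEST_MATRIX = [
    ({"amount": 500, "po_number": "PO-1"}, "auto_approved"),
    ({"amount": 15_000, "po_number": "PO-2"}, "needs_approval"),
    ({"amount": 500, "po_number": ""}, "rejected"),
]

def run_matrix() -> list:
    """Return the rows where the system's output differed from expectations."""
    return [(event, expected, process_event(event))
            for event, expected in TEST_MATRIX
            if process_event(event) != expected]

if __name__ == "__main__":
    failures = run_matrix()
    assert not failures, failures
    print("matrix passed")
```

The value of the matrix form is that business stakeholders can review and extend the event/expectation rows without touching the harness code.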
Rosaria Silipo, head of data science evangelism at KNIME, takes this one step further and suggests that no-code and low-code applications should follow similar testing standards. She says, “A low-code application should come with its own testing suite, which should follow exactly the same guidelines as for code-based applications: test units, golden tables, graceful exit, and so on. Building a web service without a failure code in the response or a web application without a graceful exit in case of error is just unprofessional, exactly as it would be for a code-based application.”
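Two of Silipo's guidelines can be sketched concretely: a golden table compares a workflow's current output row by row against a saved, known-good result, and a graceful exit means a service returns a failure code rather than crashing. The `run_workflow` and `handle_request` functions below are hypothetical stand-ins, not KNIME APIs:

```python
import csv
import io

# A stored "golden table" of known-good workflow output.
GOLDEN_CSV = "id,total\n1,100.00\n2,250.50\n"

def run_workflow() -> list:
    """Stand-in for exporting the low-code workflow's current output."""
    return [{"id": "1", "total": "100.00"}, {"id": "2", "total": "250.50"}]

def compare_to_golden(rows: list, golden_csv: str) -> list:
    """Return human-readable differences between current and golden output."""
    golden = list(csv.DictReader(io.StringIO(golden_csv)))
    diffs = [f"row {i}: expected {want}, got {got}"
             for i, (got, want) in enumerate(zip(rows, golden))
             if got != want]
    if len(rows) != len(golden):
        diffs.append(f"row count: expected {len(golden)}, got {len(rows)}")
    return diffs

# Graceful exit: a web service should return a failure code, not crash.
def handle_request(payload: dict) -> dict:
    try:
        return {"status": 200, "result": 100 / payload["divisor"]}
    except (KeyError, ZeroDivisionError) as exc:
        return {"status": 400, "error": type(exc).__name__}

if __name__ == "__main__":
    assert compare_to_golden(run_workflow(), GOLDEN_CSV) == []
    assert handle_request({"divisor": 0})["status"] == 400
    print("golden table and graceful-exit checks passed")
```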
Use low-code testing platforms and machine learning
Although developing with low code and no code often accelerates the development process and enables easier enhancements, devops teams should still perform testing and configuration reviews.
The good news is that QA engineers can develop tests with low-code testing platforms. Ram Shanmugam, CEO of AutonomIQ, a Sauce Labs company, says, “With low-code testing, you’re using advanced AI and ML techniques, so the process of writing and maintaining test scripts is done through machines. This can significantly reduce the time and cost involved, while also decreasing your reliance on test automation engineers, as normal coders and even non-coders can now generate test automation scripts. Ultimately, testers can now focus on the business needs of the software and ensure the intent of the user is preserved.”
How low-code and SaaS platforms automate testing
If you’re testing the user experience, business logic, data processing, and configuration of your SaaS or low-code application, is that a sufficient validation of quality, reliability, and security?
The overall quality also depends on how the low-code platform or SaaS vendor tests their technology and manages the underlying cloud services and infrastructure. Most platform vendors share their security certifications, service levels, and compliance credentials such as ISO, SOC, GDPR, PCI, and FedRAMP. Top vendors also share their release schedules, release notes, known defects, and service-level records, and provide webpages for checking uptime status. But not as many vendors provide details on their architecture, development standards, and testing practices.
I talked to Martin Laporte, senior vice president of R&D at Coveo, to discuss their approach to testing and deployments. He says, “In a world where components of SaaS platforms are being updated multiple times per day, observability is key in order to detect any change in behavior, like increased error rates or variations in response times. Whenever an anomaly is detected, rollouts must be interrupted with an automatic rollback on the previous working version.”
That’s a high bar for deployment frequency and testing practices, and one you hope other SaaS and low-code platforms target. This level of testing, complemented with the development team’s test automation efforts, helps reduce deployment risks, especially for applications requiring high reliability.
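The guardrail Laporte describes, interrupting a rollout and rolling back when behavior changes, can be sketched as a threshold check over a sliding window of recent request outcomes. The window size and 5% error-rate threshold below are illustrative assumptions, not values from any vendor:

```python
from collections import deque

class RolloutGuard:
    """Sketch: watch a sliding window of request outcomes during a rollout
    and signal an automatic rollback when the error rate crosses a threshold."""

    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.outcomes = deque(maxlen=window)   # True = request errored
        self.max_error_rate = max_error_rate

    def record(self, is_error: bool) -> None:
        self.outcomes.append(is_error)

    def should_roll_back(self) -> bool:
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.max_error_rate

if __name__ == "__main__":
    guard = RolloutGuard(window=20, max_error_rate=0.05)
    for _ in range(19):
        guard.record(False)
    guard.record(True)   # 1 error in 20 = 5%: at the threshold, not over it
    assert not guard.should_roll_back()
    guard.record(True)   # window slides: now 2 errors in 20 = 10%
    assert guard.should_roll_back()
    print("rollback triggered at elevated error rate")
```

A production system would track latency variations as well as error rates, as Laporte notes, but the shape is the same: observe, compare against the last known-good baseline, and roll back automatically on anomaly.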
Bottom line: If you’re not testing a low-code or SaaS application, well, then you may be making too many assumptions.