
Building a Test Automation Team

There is no shortage of discussion in the software industry about who should own the responsibility for designing, creating, and maintaining test automation. Automation roles and team structures vary across different companies, teams, and development scenarios. Concern over test automation roles tends to intensify when a startup hits a growth spurt, or when a development team needs to enlarge its QA team. You’ll face the question sooner or later. As tools and applications become increasingly complex, it will serve you well to consider how best to organize testing staff: Should you organize a dedicated test automation team, and how should it function?

Generally, there are two schools of thought on dedicated test automation teams: those who advocate for a distinct group of automation staff, and those who believe that testers and QA engineers should manage both manual and automated testing. Those who’ve been working on smaller teams often change their thinking as complexity increases. A growing application, integration challenges, newer technologies, and a large test suite are all factors that will prompt a team to reconsider whether it’s sensible to devote some resources exclusively to test automation.

Facing the challenges of test automation

Recently in the Functionize blog, we have written on both the technical and organizational challenges to test automation. If you’re ready to face those challenges, it’s important to realize that there are also challenges to building a test automation team—especially if the testing subject is a large application that has little or no automation in the development pipeline.

Elon Musk, CEO of the automaker Tesla, concedes that he relied on automation too much, and far too early: “You really want to get the process nailed down and then automate, as opposed to assuming you know what the process will be, then automating that,” he said in a recent interview with the Wall Street Journal.

While it’s important to know what to automate, software teams will eventually need some degree of test automation—and decide how to partition the responsibilities.

In this article, we look at major factors that go into the decision to form a separate automation team—especially duplication and a cohesive testing apparatus. We also provide guidelines for forming an automation team and review a number of best practices.

Most teams have developers writing software and running unit tests to get immediate validation on their code. Meanwhile, QA testers write test code to confirm unit tests, validate all integrations, and perform comprehensive testing. This arrangement works well while the application is simple and small. As functionality grows in volume and complexity, serious questions must be answered. The first is: Is there excessive duplication in test coverage? Also important: Is there a cohesive structure in the test suite—throughout the entire stack?

Eliminating duplication

This is a perpetual challenge for all software development teams. To address duplication, it’s important to recognize the root causes. If two different groups have ownership of different sets of tests, a natural division will exist—which will result in some degree of duplication. Implicitly, developers assume that testers know what needs testing beyond the unit tests. And, of course, the testers assume that developers are testing “the basics”.

While this initially seems to be a sensible approach, these general dispositions effectively serve to erect a wall between the two sub-teams. The result is that testers spend much of their effort seeking to understand what is best to add to the test suites, and also augmenting and automating those test suites. This seems to be good practice since the testers are doing their job writing tests. But, for too many teams, there is an opportunity cost to this narrowly scoped effort.

There are other essential QA tasks, such as collaborating with business/functional analysts to shape and refine use cases, performing deliberate exploratory testing, and engaging with the product owner to consider any feedback. Without participation in these other valuable tasks, the QA team spends far too much effort on one side of “the wall”. Among other issues, this results in unnecessary duplication of testing efforts between the developers and the testers.

Another problem is that testers move blindly forward to create the QA verification layer without investing valuable time to acquire in-depth knowledge of the tests that the developers have built. This is another major source of duplication and wasted effort.

Pursuing a cohesive testing apparatus

In deciding whether it’s best to dedicate specific team members to testing automation, another important question to ask is: Are the test suites structured cohesively in support of the entire application stack? The team dynamics given in the previous section often result in an organization in which two subteams are contributing separately to a single codebase. If both subteams have adequate skill levels and experience in working with code, then it’s feasible for one or several team members to take on responsibilities for integrating testing efforts, tests, and testing resources. This dedicated automation team can work toward better automation and minimizing the duplication of testing effort and the overall time needed for testing.

Guidelines for a test automation team

There are many challenges in building a robust testing infrastructure since the application undergoes constant change. The developers change and augment the source code in response to new use cases and product requirements. Any responsible development team will view the source code as a high-value repository that must be carefully managed and maintained. Similarly, tests and test code should be treated with equal care. Developers and their managers acknowledge that a high degree of discipline is necessary to build feature-rich, high-quality software. It is critical to cultivate and maintain the same mindset in the QA team. Generally, these guidelines are ideal for cultivating a solid automation team:

A test automation engineer should have the skills to work with production code, as necessary.
A team member (developer or tester) should only modify production code if they have the skills to automate tests, as necessary.
If there is a deficiency in either of the above, then that team member should pair with a mentor who can help attain this level of proficiency.
Eventually, the entire team must collectively contribute to and assume full ownership of the test code, rather than leaving it to certain individuals or disciplines.
This is ideal and can be difficult to implement for many teams that don’t already have it in place. At a minimum, any existing test automation team must operate by these guidelines in order to achieve success in the long run.

Time to Market

In growing teams, successful test automation requires a significant amount of effort—and should probably become a full-time job for at least one team member. There is much work to be done in validating all of the features, integrations, and systemic functionality. When an application grows, it becomes a job in itself to build a test framework, maintain assets, and manage a continuous stream of changes. Meeting delivery schedules with high-quality products requires that the team make it a top priority to ensure comprehensive, integrated testing. This is simply non-negotiable. It is also vital to invest significant effort to eliminate duplication.

If your application has grown in complexity and you haven’t formed a dedicated automation team, then the automation is unlikely to get done. It certainly won’t be done nearly as well as it could be. That’s because testers will tend to keep their focus on manual testing, manually identifying bugs, and excessive duplicate bug reporting. Conversely, asking the same team of testers to prioritize automation will likely result in a decrease in the efficacy of manual testing.

If you find that you need the QA team to manage both roles simultaneously, it is perhaps best to engage a support team that provides libraries and templates the QA team can use in its automation efforts. Note that this support team enables automation without itself being part of the automation team. It’s also wise to direct the QA team to focus primarily on automating acceptance tests, deferring other automation tasks until after the next product release.

Cultivating an automation team

If your testers have a strong orientation to focus on manual testing, encouraging them to embrace automation is challenging because:

They likely don’t have the skills to effectively automate.
They may lack motivation.
The entrenched mentality may be that manual testing must always take the highest priority. Since the product must ship on time, any thought of automation will naturally be a low priority.
Consolidating teams such that only one team does all the manual and automation testing can work well if care is taken to monitor it all very closely. But it can easily collapse. If it proves to be impracticable to devote some staff entirely to test automation, perhaps you can implement the following:

Ensure there are solid testing reviews and cross-training among the developers and testers.

Allocate enough time following the release for completing automation that corresponds to that release.

Conduct paired code reviews with developers and testers.

If possible, encourage at least one QA team member to devote partial time to focus on frameworks, tools, and cross-pollination. Reduce their manual/functional testing load so that they can devote valuable time to automation.

As the company grows, work to build a team that aligns with strategy and improves both quality and testing performance. Coping with increasing scale and complexity is practically impossible without automation. As you grow the team, work hard to improve your processes and hire testers who have automation skills.

The takeaway

Not all QA teams are alike. It’s necessary to adapt to company culture, resource availability, budget constraints, and different application types. Many teams have to work through a transition period of having two groups—one for manual testing and the other that focuses on automation. If you keep your eye on the ideals, you can eventually cultivate a team of testers that are all fairly proficient at some level of automation and test code maintenance.

The Importance of Planning your Tests

In many companies, especially smaller start-ups, testing is often an afterthought, with CEOs and managers sometimes viewing it as wasting valuable resources. But as we all know, without proper testing you risk releasing buggy software that damages your reputation and loses you customers. However, there is no doubt that testing can consume a lot of resources and so it’s essential to plan it carefully.

The need for test planning

Testing should be a key part of any project plan, whatever development model you are using. The traditional waterfall model of software development views testing as a distinct phase that happens after development and before some form of monolithic release. By contrast, Continuous Integration and Delivery (CI/CD) releases new code constantly, with testing happening continually during development. In both cases it’s essential to plan the testing carefully.

A well-planned test suite can save significant heartache for companies. However, test planning requires as much care as project planning. Testing should be viewed holistically to ensure all tests are properly thought through. Often individual tests can be combined into more efficient flows that test several functions at the same time. Other times it may be possible to test functionality simply by comparing system state before and after the test.
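The state-comparison idea can be sketched in a few lines. This is a hypothetical illustration, not a prescribed implementation: it assumes the system’s observable state can be captured as a flat dictionary of values, and `diff_state` simply reports what changed between the before and after snapshots.

```python
# Sketch: verify a test by comparing system-state snapshots taken
# before and after the test run. The snapshot format (a flat dict of
# observable values) is an assumption made for illustration.

def diff_state(before: dict, after: dict) -> dict:
    """Return the keys whose values changed, as {key: (old, new)}."""
    keys = before.keys() | after.keys()
    return {
        k: (before.get(k), after.get(k))
        for k in keys
        if before.get(k) != after.get(k)
    }

# Example: placing an order should decrement stock and add one order.
before = {"stock:item42": 7, "orders:count": 100}
after = {"stock:item42": 6, "orders:count": 101}

changes = diff_state(before, after)
assert changes == {"stock:item42": (7, 6), "orders:count": (100, 101)}
```

A test written this way passes only if exactly the expected keys changed, which catches unintended side effects as well as missing ones.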

UI testing is one area where planning is harder. In a UI test you are not just interested in whether the system ended in the correct state. The process of getting there matters too. Let’s take a simple example. Imagine you have a small e-commerce site you are testing. The test is simply to see if you can order an item and pay for it. So, your test consists of searching for the item, clicking to add it to the basket and then going through the payment flow. The test would seem to be successful if the correct item ends up being purchased. But as this is a UI test you also need to check for some other elements. What if the item is correct, but the photo is wrong? Or the description is in the wrong language? What if there’s a button missing on the page, or in the wrong place? Unless you specifically add checks for these to the test, your test isn’t thorough enough.

Types of tests

As we have seen in previous blogs, there are many types of test to consider when planning. These include:

Unit Tests which are used by developers to test the functionality of specific software units.

Integration Tests are used to test that different software modules are working correctly together.

Functional Testing looks at the functionality of the whole system. These tests should be the central part of your test plan.

Smoke Tests are used to quickly check that new builds aren’t broken and should be designed to test all essential functionality for the system.

UI Testing is essential for any software with a major UI element. As mentioned above, it differs from other forms of testing in that it matters what state the system (UI) is in during the tests.

Performance Testing is critical to ensure your system doesn’t fail the first time it sees a surge in customers.

Regression Testing ensures new code isn’t triggering old bugs that were previously fixed. It also highlights cases where bugs only become apparent after new code has been added.

User Acceptance Testing is used to check that your software actually delivers the necessary functionality for the end users.

A robust test plan should encompass elements of all of these. Of course, if your system is sufficiently simple you may be able to combine or collapse some of these, but you should be aware that you are doing this.

Test planning vs. planning tests

It’s important to highlight the subtle difference between test planning and planning your individual tests. Your overall test strategy needs to be planned to make sure you are testing every element that you need to. However, every test you create also needs its own detailed test plan. It’s easy to confuse these two concepts, especially since “Test Plan” is often used interchangeably for both. One way to avoid this confusion is to adopt the phrase “Test Suite” to refer to a set of tests and reserve “Test Plan” to refer to a single test.

A good way to think of this is to imagine each individual test as a piece of a jigsaw. As with a jigsaw, the detailed picture on the piece matters and, importantly, it has to fit exactly into the other pieces round it. In this analogy, the overall test plan is the completed jigsaw, with all the pieces neatly fitting together. However, unlike normal jigsaws, in testing the pieces can be assembled in lots of ways to construct the different tests mentioned above.

UI test planning

As we have already seen, planning UI tests needs particular care. When creating a functional test, the desired outcome of the test, positive or negative, is clear. With UI tests you need to think more carefully. Firstly, you need to know what to expect in the UI. For this, you can use the designer’s wireframes. Next, you need to decide which functionality is most critical, and which elements of the UI need to be checked. Thirdly, you need to note down the series of interactions needed. The important thing is to work out which of these can be left loosely defined (e.g. search for a product) and which need to be more precise (e.g. purchase product X for $20.99 and pay with a Visa card). Finally, you can formalize each set of steps into a test plan.

Traditionally most UI testing was done manually. However, nowadays much UI testing is automated. Selenium IDE makes it easy to record test plans simply using point-and-click. But the problem is that you need to create your test defensively and not make assumptions about the page having loaded correctly. During a manual test this is easy – it is immediately obvious if the wrong page has loaded, and if you know what to expect it’s also obvious if there are missing UI elements.

When recording a test with a point-and-click recorder it’s important to build in the same sort of checks. So, after each page change, you should find the elements that you can check to ensure the page loaded correctly. Take the example of the e-commerce site we mentioned at the start. Having searched for a known item, the next step of the test might look like this:

Check that the correct item appears at the top of the page of results.
Check that the correct description is shown.
Confirm that the correct picture is shown.
Check that the correct price is shown (possibly including a check on the currency).
Check that the “Buy” button is visible.
Check that the page header and footer loaded correctly (this can be done by looking for the presence of known elements).
If you are being really thorough you would include more checks for more UI elements that should be visible in the page.
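The checklist above can be expressed directly as code. The sketch below is illustrative only: in a real suite each lookup would be a Selenium/WebDriver call, but here `page` is a stubbed dictionary (and the expected values are invented) so the structure of the checks is clear and self-contained.

```python
# Hypothetical expected values for the search-results page.
EXPECTED = {
    "top_result": "Acme Widget",
    "description": "A reliable widget for everyday use",
    "image": "acme-widget.png",
    "price": "$20.99",
}

def check_results_page(page: dict) -> list:
    """Run the checklist; return failed checks (empty list = pass)."""
    failures = []
    if page.get("top_result") != EXPECTED["top_result"]:
        failures.append("wrong item at top of results")
    if page.get("description") != EXPECTED["description"]:
        failures.append("wrong description")
    if page.get("image") != EXPECTED["image"]:
        failures.append("wrong picture")
    if page.get("price") != EXPECTED["price"]:
        failures.append("wrong price or currency")
    if not page.get("buy_button_visible"):
        failures.append("Buy button missing")
    # Header/footer verified via the presence of known elements.
    for element in ("header", "footer"):
        if element not in page.get("known_elements", []):
            failures.append(f"{element} failed to load")
    return failures

good_page = dict(EXPECTED, buy_button_visible=True,
                 known_elements=["header", "footer"])
assert check_results_page(good_page) == []

bad_page = dict(good_page, image="wrong.png", buy_button_visible=False)
assert check_results_page(bad_page) == ["wrong picture", "Buy button missing"]
```

Collecting all failures rather than stopping at the first one gives a fuller picture of what broke on the page in a single test run.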

Other aspects of test planning

There are a couple of other important aspects that go into successful test planning, be it for UI testing, smoke testing or user acceptance testing. Firstly, it is essential to ensure the test environment is as close as possible to the production environment. All too often I have heard of cases where all tests pass in the test environment, only to fail in production due to some missing library, or some library version conflict. A classic gotcha like this was when I was testing a mobile app and discovered that when testing on a mobile network, many API calls were getting blocked by a transparent proxy. This hadn’t shown up earlier while testing on Wi-Fi.

Secondly, you should always plan tests defensively. This means testing for negative events as well as positive ones. Taking our e-commerce example from before, this might mean deliberately searching for products that are missing or are known to be out of stock. Thirdly, having created all your test plans it is important to look over the whole suite as a whole. Often you will then spot ways to save testing time (and resources). For instance, by ensuring that the state the system ends up in after test X is the starting state needed for test Y.
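Defensive, negative-path planning can also be sketched in code. The example below is a stand-in, not a real API: `search_catalog` and its catalog are hypothetical, and the point is that the test plan deliberately covers the missing-product and out-of-stock cases alongside the happy path.

```python
# Hypothetical catalog for the system under test.
CATALOG = {
    "widget": {"in_stock": True},
    "gadget": {"in_stock": False},
}

def search_catalog(term: str) -> dict:
    """Return a result payload; must degrade gracefully, never raise."""
    item = CATALOG.get(term)
    if item is None:
        return {"found": False, "orderable": False,
                "message": "No products matched your search"}
    if not item["in_stock"]:
        return {"found": True, "orderable": False,
                "message": "Currently out of stock"}
    return {"found": True, "orderable": True, "message": ""}

# Positive case: an in-stock product can be ordered.
assert search_catalog("widget")["orderable"] is True

# Negative cases: a missing product and an out-of-stock product
# both return a usable response instead of an error.
assert search_catalog("no-such-item")["found"] is False
assert search_catalog("gadget")["orderable"] is False
```

Writing the negative cases first often surfaces error-handling gaps that positive-path tests never touch.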


In this blog, we’ve touched on why test planning is so important and have looked in a bit more detail at what makes UI testing particularly challenging to plan. The addition of Artificial Intelligence to UI testing can help you create more loosely defined tests. But even then, you have to be extremely aware of what you are actually training the system to look for. In conclusion, test planning takes quite a bit of time and effort along with a certain amount of skill. But when done properly it will save you a lot of pain in the longer run.

AI-Autonomous Models That Are Transforming Business

As we saw in a recent blog, Artificial Intelligence (AI) is transforming our lives in ways that could only be imagined just a few years ago. Increasingly, we are now seeing major businesses deferring important business decisions to autonomous artificial intelligences. AI offers some very attractive benefits for big businesses – an AI won’t decide to change jobs, it works uncomplainingly 24 hours a day, 365 days a year, and often, in its narrow area of expertise, it significantly outperforms humans. In this blog, we will explore how such autonomous AI is being used to create new models for business.

Brief Overview of Autonomous AI

The Merriam-Webster dictionary defines autonomous as “existing or capable of existing independently”. One of the most exciting things about AI is the ability to create autonomous computer systems, that is, computer systems that are capable of making decisions without human intervention. We are all familiar with the idea of autonomous cars, which combine AI with multiple sensors to identify the road ahead and avoid any hazards. But Autonomous AI can now be applied to many different areas of business and life.

AI is the generic term for any computer system that is able to display human-like intelligence. One of the most important current areas of AI development is Machine Learning (or ML), where a set of data is used to train an algorithm to recognize certain patterns in a dataset. When the algorithm is presented with new data, it is then able to identify those patterns. In most cases, the model will keep trying to learn from all the new data it receives.
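The train-then-predict loop described above can be illustrated with a deliberately tiny model. This is a toy sketch, not a production ML technique: “training” just computes a per-label average from labeled samples (all data here is invented), and new values are classified by which average they fall closest to.

```python
def train(samples):
    """samples: list of (value, label) pairs; return per-label means."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Assign the label whose learned mean is nearest to the value."""
    return min(model, key=lambda label: abs(model[label] - value))

# "Train" on response times labeled fast/slow, then classify new data.
model = train([(0.1, "fast"), (0.2, "fast"), (2.0, "slow"), (2.5, "slow")])
assert predict(model, 0.3) == "fast"
assert predict(model, 1.9) == "slow"
```

Real ML systems learn far richer patterns, but the shape is the same: a training step summarizes historical data, and a prediction step applies that summary to data the model has never seen.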

“Big Data” is a term that is often bandied about, usually by marketing departments who like the sound of it. But the simple fact is that data is a fundamental requirement for any AI system. As a result, one of the biggest contributors to the AI revolution has been a parallel revolution in the collection, analysis, and retention of data.

Deep Learning is a subset of Machine Learning that uses neural networks that simulate how a human brain works, and these networks are able to exhibit traits such as short-term memory and handle greater complexity. Deep Learning can create models that can be used for things like image recognition and natural language processing.

Business Models for Using Autonomous AI

In a 2017 report, the Harvard Business Review (HBR) identified four models for using autonomous AI in business: the Autonomous Advisor, the Autonomous Outsourcer, the World-Class Autonomous Employee, and All-in Autonomy. Each of these models can be found in use somewhere in the business world.

The Autonomous Advisor replaces the role traditionally taken by management consultancies like Bain and BCG. Here, the autonomous AI is given the task of reviewing critical business data and drawing inferences from it in order to make recommendations for senior management. Just as with the use of management consultants, this model can lead to workforce malaise, as seemingly random diktats filter down through the ranks of management and are blamed on “the consultants”.

The Autonomous Outsourcer model is associated with companies like Accenture; indeed, in the HBR report, the authors use the portmanteau “Accenturazon” (a combination of Accenture and Amazon) to sum up this model. In this model, businesses use AI services that are hosted in AWS, Azure or Google Cloud. These can provide many useful functions that would be beyond the budget of a business to develop itself. However, because the service is outsourced, it’s essential that there is proper management oversight.

The World-Class Autonomous Employee encompasses cases where AI is employed to perform a specific function better than any human could. Here the AI is viewed as an integral part of the team, and it is included in all management and business processes. Unsurprisingly, this model is favored by many of the tech giants like Google and Netflix.

The final model, All-in Autonomy, refers to businesses where an AI entity has complete control over some significant business function. The best example here is the algorithmic trading companies, who use AI to improve their performance, knowing AIs can make trades faster than a human ever could.

One point highlighted in the article is that companies shouldn’t mix up these models. Each model requires a different management approach and will have different impacts on the existing workforce. They also introduce the idea of the Chief AI Officer – an executive with responsibility for all AI within the business. Certainly, as AI grows in importance, businesses will have to employ more senior managers with expertise in data science.

Examples of Autonomous AI in use

Now that you know a bit about AI and the models for applying AI in business, let’s look at some examples of how these models are already being used across various industries.

Autonomous Advisor

While many companies are developing AI solutions internally, there are also many other startups who are building their market on providing AI services to other businesses. One such company is Levadata. Levadata was born out of a real business need to cut costs during the recession in the early part of this decade. Originally it was an internal project that applied AI to analyze data from suppliers and external data sources in order to strengthen the company’s negotiating position. The project was so successful that a decision was made to spin out a separate company. They now provide AI-driven procurement analysis using a classic SaaS model.

Autonomous Outsourcer

As mentioned earlier, businesses are becoming increasingly data-driven. Without sufficient data, no usable AI algorithms can be created, and AI systems need a constant stream of data to process. However, one of the big challenges is the sheer scale of data most businesses have to manage. Data management is a multi-faceted problem. Firstly, you have to define the structure and/or metadata for your data. Secondly, you need to store it in a database, object store or bulk data store. Thirdly, you need policies for backup, management, access, etc. The typical solution is to create a Data Warehouse. But this can be prohibitively expensive or complex.

Oracle have solved this by creating the Autonomous Data Warehouse Cloud. This is designed to automatically create a distributed database for your data and provide a simple interface for storing and retrieving the data. The system can scale elastically and is self-administering. In a Forbes article, Paul Daugherty (CTO at Accenture) says “This gives database superpowers to business people that they’ve never had before”.

World-class Autonomous Employee

Some tasks are traditionally a job for experts with years of experience and knowledge. One such task is patent searching. For many businesses, protecting their IPR is essential, and to do this properly, they need to understand the patent landscape. Companies like Elementary IP and Cipher produce tools that use AI to improve the process of patent searching. In effect, their AI is replacing (or at least augmenting) the role of the patent searcher.

Another example was when Google announced that they had used their Deep Mind system to improve the PUE (efficiency) of their data centers by some 40% over and above what their own experts had achieved.

All-in Autonomy

As mentioned above, the classic example of All-in AI Autonomy is algorithmic trading. Here the entire process of making money by trading has been handed over to AI models that dictate exactly when to buy and sell, making their decisions in mere nanoseconds. In this sort of trading, speed is everything, and many of these companies invest huge amounts in R&D to get the fastest systems possible (often specialized FPGA devices in place of conventional CPUs). Without the AI model, there would be no company.


As we have seen, AI is a major part of modern business and has led to the creation of new models for running businesses. By carefully applying these models to their businesses, C-suites can improve the performance of their companies, protect the bottom line and ensure they are future-proofing themselves. Google are such strong advocates for the use of AI that they have set up the Google AI initiative to share knowledge and empower more people to understand and use AI in their businesses and lives.

Some areas of business are better suited than others to leverage AI. Within the world of software development, UI testing is one of the biggest areas that can benefit from AI. Here at Functionize, we believe that AI can solve many of the really hard problems relating to UI test automation at scale. Ever since Tamas, our CEO, first came up with the idea to use AI to improve test automation, our whole company has been focused on nothing else. Over the last 4 years, we have been constantly developing new approaches to help you. We now offer several models for integrating AI into your testing strategy: as an autonomous expert able to create self-healing tests, or as an outsourced AI-driven testing environment using the Functionize Test Cloud.

Testing Automation Success in an Agile Environment

Even though agile software development has become quite common, many teams continue to grapple with achieving even modest levels of test automation. Agile methodologies present significant challenges to any automation team. The essence of agile is more frequent software releases and increased team collaboration, but this often results in too many iterations, ambiguous project scope, and little or no documentation.

Lamentably, and often unnecessarily, test automation initiatives fail to deliver. This is primarily due to these factors:

Frequent failures — Highly complex, interconnected systems and applications often suffer from a variety of test environment inconsistencies. These thwart test automation efforts in many ways, and often produce false positives. The extra effort is tedious, burdensome, and decreases motivation for continuing with automation efforts.

Costly, extensive maintenance — Conventional script-based test automation requires frequent updates to keep up with a high-speed, dynamic delivery process.

Performance — Simply automating conventional tests often results in long execution times. This means that it becomes impracticable to run an adequate regression suite against each build, which also means that the team doesn’t get accurate, immediate feedback on how recent changes impact user experience.

Many software teams are looking to testing automation as they seek to cope with continuous integration/continuous delivery (CI/CD)—a common delivery framework for agile teams. Although automation can eventually achieve the efficiency that enterprises need for critical, repetitive, and complex test processes, it can become a disaster if there is a lack of planning and analysis.

Perhaps your agile team now realizes the need for testing automation but is wary of embarking on a potentially treacherous journey. Or, maybe you’ve been trying—without much success—to achieve effective outcomes in your automation efforts. Here, we consider the main challenges to this pursuit, how to address those challenges, and how to increase the probability of success.

Automate in parallel

A major reason that teams that attempt to implement test automation don’t achieve their quality objectives is that agile development is all about short iterations in a continuous delivery pipeline. Shorter, frequent sprints usually result in more bugs, which require more fixes. It becomes difficult to find the time to identify, fix, and test the products of each iteration. In pursuit of test automation, the mindset and the culture must change before any success can be realized.

While it is very challenging, success is only possible when enough time is allocated for testing—and automation efforts can proceed alongside the development sprint. Otherwise, the entire pipeline will lag, or release quality will continue to decrease. Parallel testing and automation will also eventually be more responsive to new requirements and increase team productivity. Some testers will also have additional capacity for exploratory testing—which is necessary even in the most highly automated environments.

Build robust tests

There’s really no way around it. Testers must build tests that can be readily integrated into the regression suite. Both scripted and scriptless tests must be built with sufficient flexibility to accommodate long-term regression testing requirements.

The objective here is to consistently execute automatic, accurate, smooth, high-performance regression testing with minimal intervention from any tester. If the test scripts are stable and sound, testers can finish the regression phase while avoiding unnecessary modifications. Speed and accuracy improvements are sure to follow.
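One common source of instability in regression scripts is timing. A small polling helper can make checks tolerant of transient delays instead of failing on the first hiccup. This is an illustrative sketch only; the condition being polled is a hypothetical stand-in for a real check such as “the element is visible” or “the record exists”:

```python
import time

# Poll a condition with a timeout rather than asserting it once.
def wait_until(check, timeout=5.0, interval=0.1):
    """Poll check() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: a hypothetical condition that only becomes true on the
# third poll, simulating a slow-loading page element.
calls = {"count": 0}
def flaky_condition():
    calls["count"] += 1
    return calls["count"] >= 3

assert wait_until(flaky_condition, timeout=2.0) is True
```

Helpers like this keep the retry logic in one place, so individual test scripts stay short and stable.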

Pursue DevOps integration

Solid DevOps integration—development, testing, and operations—is essential to supporting an effective agile development team. DevOps enables cross-functional collaboration that is vital to rapid feature development and the automation that is necessary to support continuous delivery. DevOps is critical to a shared-work environment in which development, code integration, and automated testing need to happen in real-time.

Evaluate and deliberate over automation tools

If you don’t invest quality time in evaluating the capabilities of automation tools, you’re likely to make poor investment decisions. Prior to any test automation tool purchase, it is crucial to ensure that the tool will support the success of your test automation efforts. To avoid wasted time, wasted money, and unmet expectations, look for automation tools and technologies that:

Readily handle the automation of unit and end-to-end testing.

Provide easy-to-use interfaces, features, and navigation.

Seamlessly aid in accumulating and maintaining a suite of regression tests.

Return test results quickly.

Automatically detect function/feature changes and self-heal / self-adjust tests as necessary.

Provide solid support for integration with test management tools, bug tracking tools, and continuous delivery setups.

Don’t miss out by omitting Functionize from consideration. Functionize is the first adaptive cloud-based testing platform to leverage machine learning to accelerate software development by significantly improving your testing capabilities. Functionize significantly minimizes testing infrastructure and seamlessly integrates with virtually any CI/CD environment. (Yes, that was our shameless plug. Continue reading for more advice on how to improve your chances for success in test automation.)

Keep tests concise and efficient

Ensure that your test cases are concise and lean, using only the test data that is necessary to achieve the expected testing outcomes. Feature-specific, small-footprint test cases also contribute to a solid, manageable regression suite that is easy to maintain—especially in environments that span various programming languages, scenarios, and configurations.
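As an illustration of what “concise and lean” can look like in practice, here is a pytest sketch: one small function under test, minimal data per case. The discount_price function and its cases are hypothetical, not from a real product:

```python
import pytest

# Function under test: apply a percentage discount, rounded to cents.
def discount_price(price, percent):
    return round(price * (1 - percent / 100), 2)

# Each case is feature-specific and carries only the data it needs:
# one typical value plus the two boundary conditions.
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 10, 90.0),    # typical valid case
    (100.0, 0, 100.0),    # boundary: no discount
    (100.0, 100, 0.0),    # boundary: full discount
])
def test_discount_price(price, percent, expected):
    assert discount_price(price, percent) == expected
```

Parameterizing like this keeps the test body to a single assertion while the data table documents exactly which scenarios are covered.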

Make time to compile relevant test data

Don’t skimp on your test data, since it is vital for test automation success. Take time to optimize the size of the datasets, and ensure that the data itself is suitable for the application(s) that you’re testing. As necessary, separate the data into categories such as valid data, invalid data, and boundary conditions. Various data sources might include an XML-generating database, a structured DBMS, or text or Excel files. Also ensure that the data is current and devoid of obsolete values.
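As a small illustration of the category idea, the sketch below groups data for a hypothetical validate_age input check; the function and its rules are illustrative stand-ins for your application’s own input handling:

```python
# Hypothetical function under test: accept integer ages 0-130.
def validate_age(value):
    return isinstance(value, int) and 0 <= value <= 130

# Test data separated by category, as suggested above.
TEST_DATA = {
    "valid": [25, 64],
    "boundary": [0, 130],          # edges of the accepted range
    "invalid": [-1, 131, "25", None],
}

def run_category(category, expected):
    """Check that every value in a category yields the expected verdict."""
    return all(validate_age(v) is expected for v in TEST_DATA[category])

assert run_category("valid", True)
assert run_category("boundary", True)
assert run_category("invalid", False)
```

Keeping the categories explicit makes it obvious at review time whether the boundary and invalid cases have gone stale as the rules change.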

Periodically conduct test reviews

Periodically review your test cases and data to ensure they contain all necessary updates, and verify the validity of all tests. To avoid inefficiency, bloat, and potential secondary problems, make a strong effort to identify and archive tests that have become irrelevant to current test cycles. As appropriate, validate the substance and functionality of those tests that are most likely to have an enduring impact on your test automation program.
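One lightweight way to surface candidates for archiving is to track when each test was last relevant and filter by age. The inventory, names, and dates below are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical inventory: test name -> date the test last exercised
# a feature that still exists in the product.
inventory = {
    "test_login": date(2018, 5, 1),
    "test_legacy_export": date(2016, 2, 10),
}

def stale_tests(inventory, today, max_age_days=365):
    """Return tests whose last relevant date is older than the cutoff."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, last in inventory.items() if last < cutoff)

assert stale_tests(inventory, date(2018, 6, 1)) == ["test_legacy_export"]
```

A report like this does not decide anything by itself; it simply gives the periodic review a concrete shortlist to examine.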

Continuously monitor the development environment

It’s also important that testing teams frequently track changes to all development and staging environments—including additions or modifications to cloud environments, complex virtual machine clusters, and external databases. Anomalies, issues, and defects can lurk outside the core application—in the integration frameworks, network configuration, services, and databases. A clear, precise understanding of all core and supporting environments goes much further toward keeping the team focused on achieving quality targets. That is certainly preferable to blindly scrambling to find root causes.

Following these suggestions and pursuing excellence will help your team to realize a high return on the investment of your time and capital. Dedication and perseverance will result in a high degree of automation and higher levels of quality in your deliverables. Over time, you’ll also benefit from faster performance and increases in testing efficacy.

What is Selenium Grid | Scaling Test Executions

Ever since websites grew beyond simple pages of information with photos and started offering interactive services such as online shopping, social media, and video, proper UI testing has been essential. However, testing web applications is often far more complicated than testing an equivalent mobile app, since there’s a much broader matrix of test variables. For instance, a current web application would, as a bare minimum, need to be tested across three operating systems (Windows 8, Windows 10, and macOS), four browsers (Internet Explorer, Microsoft Edge, Chrome, and Safari), and several screen resolutions. And if you are thorough, you could easily end up with 50 or more different configurations.
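The combinatorics behind that number are easy to sketch; the OS, browser, and resolution values below are illustrative placeholders:

```python
from itertools import product

# The test matrix grows multiplicatively with each dimension.
oses = ["Windows 8", "Windows 10", "macOS"]
browsers = ["Internet Explorer", "Edge", "Chrome", "Safari"]
resolutions = ["1366x768", "1920x1080", "2560x1440", "375x667", "768x1024"]

configs = list(product(oses, browsers, resolutions))

# 3 x 4 x 5 = 60 distinct configurations to cover
assert len(configs) == 60
```

Add one more browser or a second screen-size tier and the count jumps again, which is exactly why manual coverage of the full matrix stops being practical.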

As a result, people have long dreamed of ways to automate and scale the process of testing web applications. One of the early test automation success stories was Selenium, and it’s still a major player today. We discussed Selenium IDE in a previous blog. Selenium IDE is great for creating simple tests, and, coupled with Selenium WebDriver, it can be a great tool for regression testing.

In this blog, we will look at Selenium Grid, a tool that allows you to run multiple instances of WebDriver in parallel. I’ll remind you of a bit of the history of Selenium, explain Selenium Grid, explore why it’s such an important part of modern web application testing, and then look at some of its limitations.


Most of you will be extremely familiar with Selenium. Jason Huggins (one of the fathers of test automation) created the first version of Selenium in 2004 while he was working for Thoughtworks. Selenium was specifically designed to help script and automate the testing of web applications. It quickly became an open source project and additional tools were added to the framework over time. Since then, Selenium has gone on to become one of the most, if not the most, widely-adopted software testing frameworks globally.

The core elements of Selenium are:

Selenese, a domain-specific language designed to define test cases.

Selenium IDE, a Firefox plugin that allows you to record test cases and test suites and then replay them (but only in Firefox).

Selenium WebDriver, which coordinates the replay of tests to the browser and can work across many different browsers.

Selenium Remote Control, a tool to allow you to create automated UI tests in any language and run them remotely against any HTTP website on pretty much any modern browser.

Selenium Grid, which allows Selenium test suites to be coordinated and run across multiple servers in parallel.

The development and integration of all these projects is coordinated by Selenium HQ.

Selenium IDE and WebDriver

Selenium IDE provides a really elegant way to capture and record test cases by pointing and clicking within a browser window. This is great because it accurately replicates how a real-world user will interact with the application. Clearly, as with all UI testing, a good deal of planning has to be done before you record each test case. Selenium IDE is particularly suitable for rapidly prototyping regression and smoke tests. Selenium IDE can also be used to replay the test(s) you have recorded in Firefox. More often, though, users will choose to export their tests and suites in one of several formats, including C#, Java, Python, and Ruby.

Although it has playback capabilities, Selenium IDE is somewhat limited by the fact that it functions only as a Firefox plugin. This is where Selenium WebDriver comes in. Unlike IDE, WebDriver can replay your recorded tests in almost any browser. Currently, WebDriver is compatible with the following browsers:

Google Chrome

Internet Explorer 7, 8, 9, 10, and 11 on appropriate combinations of Vista, Windows 7, Windows 8, and Windows 8.1

Firefox: latest ESR, previous ESR, current release, one previous release

Android (with Selendroid or appium)

iOS (with ios-driver or appium)

Because of this, WebDriver is really well suited for running regression tests. Furthermore, WebDriver can also replay tests that were manually created/edited in Selenese.

However, Selenium WebDriver and IDE suffer from two big issues. Firstly, selectors are very narrowly defined during the record process. This has the effect that even quite minor changes to the code, or a minor change of web framework, can render all your test cases useless. Secondly, both run as a single instance, and at any one moment, the server can only run tests on one combination of OS/browser/screen resolution at a time before having to be reconfigured. A good regression test suite has to be run across the full matrix of different server/OS/screen resolution combinations, and nowadays, that often includes testing the responsive mobile version of the application too. This means large regression test suites can easily take many days to complete using WebDriver.

Selenium Grid

Enter Selenium Grid, one of the youngest members of the Selenium family. Selenium Grid allows you to coordinate multiple instances of Selenium running across a number of different servers. Essentially, it provides you with a way to create a distributed test environment for Selenium. It works by distributing the tests across the available number of servers (nodes). This will speed up the execution time roughly linearly compared with running on a single machine. So, if you have 10 machines it will take roughly 1/10th the time to complete your tests.

This makes Selenium Grid a powerful tool for regression testing, where you are keen to expose your code to as many environments as possible but need to minimize the time taken. This also makes it a powerful tool for smoke testing new builds across a standard set of platforms and environments, helping speed up the cycle of Continuous Integration/Continuous Test.

Another benefit of Selenium Grid is it can be used to ensure you make better use of your test infrastructure. Rather than manually setting up and reconfiguring servers as singletons, Selenium Grid can be used to automate this process and so ensure your infrastructure is better-utilized at all times. This will also reduce the burden testing otherwise places on your DevOps/sysadmins.

When you set up Selenium Grid, one server is appointed as the hub. Server nodes are connected to the hub, and the hub maintains a list of which browser instances are available on which node. When a test (or test suite) is run, it will request the specific browser instances it needs. The hub supplies a list of appropriate nodes, and the test is distributed among them.
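As a rough sketch of that hub-and-node setup in the Grid 2 era, both roles are started from the same standalone server jar; the jar filename and hostname below are placeholders for your own environment:

```shell
# On the hub machine: start the Selenium server in hub mode
# (the hub listens on port 4444 by default)
java -jar selenium-server-standalone.jar -role hub

# On each node machine: start in node mode and register with the hub
java -jar selenium-server-standalone.jar -role node \
     -hub http://hub-host:4444/grid/register
```

Once registered, the hub advertises each node’s browser instances, and tests request capabilities from the hub rather than targeting any node directly.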

Currently, the Selenium HQ team is in the process of deprecating the original version of Grid (imaginatively called Grid 1) and will support only Grid 2 in the future. The original version of Grid supported only scripts written for Remote Control and was actually a completely different server and setup. Grid 2 is now bundled as part of the main Selenium Server install file and also supports scripts written for WebDriver.

Limitations of Selenium Grid

While Selenium Grid is a really useful tool, it does suffer from some real limitations. These make it less useful than it might otherwise be. Firstly, it has relatively poor scalability compared with many modern applications. It certainly is unable to scale up or down on demand. Secondly, it is pretty static. Each server is configured in advance with a subset of the required browser instances available. If you want to vary this, you have to reconfigure. Thirdly, although it is able to be run on virtual servers, it isn’t actually designed as a cloud-native application. As a result, it isn’t optimized to take advantage of things like distributed storage, dynamic scaling, and automatic failover.


Selenium transformed the process of UI testing web applications by introducing the ability to automate tests. Over time, Selenium has developed to encompass a whole family of test automation tools. Selenium Grid is the newest of these, and in its current iteration, provides a powerful way to distribute your tests across a number of servers. However, as we have seen, it suffers from drawbacks. Some of these are particular to Grid, but some, like the rigid nature of selectors, are inherited from the original Selenium.

So, what could be done to make test automation better? Well, the obvious thing would be to create a cloud-native test automation tool. This would be able to dynamically instantiate or delete test servers as needed. Given how relatively simple a UI test server can be, it would be an obvious candidate for containerization. Test scripts could be coordinated from a central location but accessed from anywhere. Test results would be made available anywhere they were needed.

In the ideal world, artificial intelligence would be used to improve the process. For instance, using Machine Learning you could develop a system that automatically worked out when it needs to trigger additional tests (for instance, if a particular smoke test failed). AI could also create “fuzzy” selectors that didn’t rely on a perfect match. Sound like a bit of a dream? Well, it’s a dream that Functionize is rapidly making a reality!

Cultivating the Right Mindset for Successful Test Automation

Test automation has indisputable benefits, but many automation initiatives head toward failure because the team doesn’t prepare or plan. Building a test automation strategy that will effectively support the needs of your team and business goals can be quite challenging. For many teams, traceability issues, cultural inhibitors, and scripting-language skill sets are only some of the tall hurdles that block the way forward.

“The problem is not that testing is the bottleneck. The problem is that you don’t know what’s in the bottle. That’s a problem that testing addresses.” — Michael Bolton

Software is becoming increasingly complex, and there are many tools that promise to help you keep pace with this complexity. The future of testing automation is being built upon a foundation of computer vision, machine learning, and self-diagnostic/self-healing frameworks. Yes, indeed, tools can be immensely helpful—in the right hands. Equally important is how a testing culture continues to pursue and implement best practices that align with modern technologies.

Scripting requires experience and skills

Any product development team that pursues conventional test automation will bog down if its QA teams do not acquire and maintain the coding skills for writing automation scripts—which entails learning scripting frameworks and languages. Within most scripting languages you’ll find at least one testing framework that requires additional expertise (such as pytest for Python or TestNG for Java). While this affords flexibility and opportunity, the additional complexity must be handled properly if a team hopes to succeed with its test automation strategy.

Ideally, a testing team should include some members who take an interest in learning new skills and then convey those skills to the rest of the team. Programming skills can be especially useful. Some testers will have the enthusiasm and aptitude to learn the fundamentals of a general-purpose language such as Python, which applies across many scripting and automation tasks. Another benefit is that testers who become proficient in coding practices can communicate much more effectively and fluidly with developers. It’s also important to realize that new solutions are available which can largely replace scripting in your testing environments. Functionize offers an entirely different—yet highly effective—alternative to scripting.

The problem with tossing it over the wall

For many companies, any testing that is transferred to an entirely separate team is likely to be done manually. Attempts to automate in an over-the-wall environment often boil down to end-to-end testing that is quite cumbersome to maintain. Because end-to-end tests require an environment that closely reflects the environments of the end users, it isn’t practicable to isolate specific features and components. Testers will naturally return to manual testing to avoid constant updates to brittle end-to-end test code that breaks with most feature changes.

An isolated tester rarely has the capability or the insight to help with testing deeper within the software. Unless testers have coding skills or work in paired testing teams, they don’t get unit testing experience—but this is where much of the effective verification happens. Unit tests isolate the small elements of a software system and verify the correct function of all those parts. To test effectively in this capacity, it’s necessary to have a solid grasp of the code itself. Typically, it’s the developers who do this.

Maintaining responsibility

Here’s another dynamic of testing culture: when testing responsibility is diverted away from developers, they may lose the incentive to take full responsibility for ensuring that the software actually works. This can erode trust, especially if there is a rush to implement new features. There will be a lengthening queue of features awaiting verification, and days or weeks may pass before the testers get an opportunity to examine the code and give critical feedback. This separation and deferment typically evolves into additional ranks of testers who verify and re-verify until everything seems OK, and the entire fragile arrangement can come to a dead stop.

Minimizing handoff

Reducing the extent of handoff is essential to testing efficacy. A single team must take the responsibility for testing, supporting, and delivering systems. Members of the testing team can work closely together with developers. A team member can assume the role of both tester and developer to cross-verify the work products and minimize bias. Instead of developers performing some unit tests and then handing off the remaining testing work to a separate test team, the entire team can collaborate and decide together what type of testing is most suitable to a specific product development scenario—unit testing, exploratory testing, or automatic end-to-end testing.

It’s best for a testing team to narrow its focus to the release of a small set of related features at a time—with everyone on the team working together to enable seamless development, testing, and release of a single feature set. Instead of merely accepting the inefficiency and queuing of throwing it over the wall, this single team can easily maintain focus on a considerably smaller feature set that will release within a two-week window. Narrowing the focus this way also contributes to higher productivity for reasons that go well beyond internal testing. Delivering smaller sets of features to customers should result in much quicker customer feedback. Testing and staging environments are easily configurable—and reusable—for hosting a smaller set of changes. External customer reviews, in turn, become shorter in duration, higher in clarity, and more specific.

Coming to see the value of automated testing

A team that takes on the responsibility for developing, testing, and supporting a system might take a while to realize that comprehensive manual testing will be woefully insufficient as the software grows in complexity. Ideally, management should take the initiative and ensure that the team learns about the benefits of automation. Without any guidance, automated testing may take years to happen organically. Even when the team does realize the need for it, many years may pass before it reaches proficiency. This is easily avoidable by finding the right expertise and the right toolsets to clearly demonstrate the value proposition.

The only way to tangibly demonstrate the value of testing automation is to configure and implement it. The team members must get their hands dirty to acquire the confidence in the work for which they will actually be responsible. A highly effective way to accomplish this is to provide the opportunity for the team to get exposure, perform the techniques, and then apply them immediately to real software that is challenging to test. The only way to achieve an effective, enduring transformation to testing automation is to make it real by solidifying the practice of it.

Direct experience leads to adoption

Adoption of test automation is much more probable when team members actually realize and experience the following:

Test-driven development improves the speed and quality of software development.

Confidence increases through hands-on implementation of automated testing for complex systems.

Some automation tools can greatly minimize the burden of test automation.

Solid test automation readily supports the addition of new application features and simplifies the testing of existing features.

Automated tests can be leveraged to analyze the root cause of existing issues, automatically fix many of those issues, and minimize their recurrence.

Automated tests can easily handle complex testing scenarios that are impracticable to test manually, such as integration with real-time, external systems.

Some testing scenarios are not productively automatable. Learning which types of testing should remain manual or exploratory will help clarify the value of automated testing in all the other scenarios.

Impose few mandates

If a development team manager imposes mandates, additional tests are likely to be generated. But there is no assurance that those additional tests will be of any use. Tests that do not add value waste effort and increase the number of assets to maintain. Over time, useless tests can cause confusion, since tests eventually harden into inflexible assessments of how the software or system should function.

It is best to avoid these types of mandates:

A specific amount of code coverage, which measures how much of the code is exercised by test cases. Coverage of 80% means that 20% of the code never executes when the test suite is run. Mandates for code coverage lead to rushed, wasteful tests.

While code coverage may be a good approach for finding untested areas, it doesn’t assure any particular level of quality. Test case efficacy isn’t measurable by enforcing code coverage to a particular extent. Also, there will always be functional areas that are not worth testing. So, don’t mandate complete coverage.

Mandating that tests be written before the code is ready—which is test-driven development (TDD). Yes, TDD is an excellent practice, but it should not be mandated universally. Some software features and functions are too challenging for TDD or simply do not benefit from it. It’s much better to let team members use good judgment and apply TDD where it’s sensible to do so.

Mandating a minimum number of tests, which does nothing for testing efficacy. Keep in mind that it is often quite beneficial to eliminate wasteful tests.

Such mandates, imposed on testers who are typically overburdened, will negatively affect the maintainability of the system and produce a number of useless tests. Whenever possible, share the responsibility for deciding how and what to test, clearly explain the value of specific test cases, and solicit feedback.
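The coverage caveat above can be made concrete with a hypothetical illustration: a buggy function that reaches 100% line coverage from a single test, because no test ever exercises the failing input:

```python
# Hypothetical function with a deliberate bug.
def absolute(x):
    return x  # bug: should return -x when x is negative

def test_absolute():
    assert absolute(5) == 5  # executes every line of absolute()

test_absolute()            # passes, and coverage reports 100%
assert absolute(-5) == -5  # the bug a coverage mandate never caught
```

The coverage tool is not wrong; it faithfully reports that every line ran. It just says nothing about whether the assertions were worth making.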

Be strategic, not tactical

While automation tools have existed for many years, many teams have struggled to implement a truly successful, comprehensive test automation apparatus. Success requires deliberate and careful planning that has full support from company management. Adequate, enthusiastic resources are a must. It is important to view the automation effort as a critical line of investment—with clear priorities and solid process definition. Measure progress throughout the initiative, with tangible metrics that demonstrate that goals are being achieved. If you persist—and properly cultivate and nurture the effort—your automation infrastructure is likely to mature and expand into a system that is scalable, robust, and maintainable.

Part 2: AI’s impact on Business | Visual Intelligence

In the first part of this blog, we explored what AI is, looked at some history and some of the applications of AI in business. I left you with the thought that the application of AI to vision is one of the big areas for growth. This blog will explore this in more detail and includes links to cool things that have been done recently.

Visual applications of AI can be split into three areas: machine vision, the creation of artificial scenes, and the visualization of big data. Each of these uses different techniques, and they have different applications. But before we go into detail, I want to give some more background on neural networks, especially Convolutional Neural Networks (CNNs), which are widely used in computer vision and AI applications.

Neural Networks

Typically, neural networks are 1-dimensional constructs. They take a vector of input data, transform it via one or more interconnected hidden layers of neurons, and give a vector of outputs. At each stage, nodes sum their inputs with different weights and send that sum on to the next layer. The final outputs sum to 1 and represent the probability that the input matched a given pattern. During the learning process, the weights are adjusted using a method called backpropagation. The aim is to make the outputs as accurate as possible.

In computer vision, the network may be trained to recognize hand-drawn numerals. The input is a simplified version of the drawing (reduced to something like a 32×32 matrix and then rasterized into a vector). There would be 10 outputs reflecting the probability that the input was the numeral 0, 1, 2, etc. In the ideal world, a numeral 3 input would give an output of [0,0,0,1,0,0,0,0,0,0]. But in practice, the probability will never be as high as 1.
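The digit-recognition setup above can be sketched in a few lines of NumPy: a rasterized 32×32 input, one hidden layer of weighted sums, and a softmax output of 10 class probabilities that sum to 1. The weights here are random placeholders, so this “network” is untrained and its predictions are meaningless; the point is the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(32 * 32)                      # rasterized 32x32 input vector
W1 = rng.standard_normal((64, 1024)) * 0.01  # input -> hidden weights
W2 = rng.standard_normal((10, 64)) * 0.01    # hidden -> output weights

hidden = np.maximum(0.0, W1 @ x)             # weighted sums + ReLU activation
logits = W2 @ hidden                         # one score per digit 0-9
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: probabilities

assert probs.shape == (10,)
assert abs(probs.sum() - 1.0) < 1e-9         # outputs sum to 1
```

Training would adjust W1 and W2 via backpropagation until, for a handwritten 3, the probability at index 3 approaches (but never reaches) 1.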

In a Convolutional Neural Network, the input picture is divided into small pieces. Each small tile undergoes a convolution operation (a form of 2D filter). The results of this convolution are then passed through one or more pooling layers. These further simplify the result by collapsing each square to a smaller square using a simple function such as max, min, or average. For example, max pooling with a stride length of 2 collapses each 2×2 tile to its maximum value. This divide, convolve, and pool process may be repeated in several stages until you have extracted the minimum feature set that identifies the item you are looking for.
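The pooling step is simple enough to show directly in NumPy; the 4×4 input values are an arbitrary example:

```python
import numpy as np

# Max pooling with a 2x2 window and stride 2: each non-overlapping
# 2x2 tile collapses to its maximum value, shrinking 4x4 to 2x2.
def max_pool_2x2(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 3, 2, 1],
                [4, 6, 5, 2],
                [7, 2, 9, 0],
                [1, 8, 3, 4]])

pooled = max_pool_2x2(img)
assert pooled.tolist() == [[6, 5], [8, 9]]
```

Each output cell keeps only the strongest response in its tile, which is how pooling discards position detail while preserving the detected feature.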

Because CNNs operate in 2 dimensions, they are much better at identifying visual features such as edges or shapes. For instance, when they are used to perform the hand-written digit task described above they will function much better because they can rapidly identify features like the cross shape in the center of a figure 8. They are also relatively simpler and require fewer neurons to achieve the required accuracy.

Computer Vision

Nowadays we’re all familiar with the concept of computer vision, especially applications like facial recognition or handwriting recognition as described above. However, there are other applications that are less familiar. One of these is called image segmentation. CNNs like those described above are ideally suited to identifying a specific object within a larger scene. But typically, a scene will contain multiple objects. This is particularly true of things like a view of a road as seen from a self-driving car. Image segmentation is the process of identifying and classifying all the objects in a scene. In one example from NVidia, a system was trained to identify several categories of object including cars, pedestrians, street furniture, road, and sidewalk.

Obviously, this technique has direct application to autonomous driving. However, it also has the potential to be applied in other fields, particularly medicine. This is the basis of the deep learning system for skin cancer identification that I described in the previous blog.

Analyzing video

Video is one of the big areas of AI development. As hardware gets faster and as people like Amazon bring online more and more powerful machines like the new C5 and P2 instances, it becomes increasingly easy to do this in real-time and at scale.

One key area of research is to identify and track human figures within video. This allows you to do some amazing things. For instance, by analyzing people’s feet as they walk around a shopping mall, you can recognize their gait and use this to track them from shop to shop. This is now being used in malls to track footfall more accurately. This technique can also allow you to construct “stick-men” figures that follow the movements of people in a crowd. This has real potential for improving CGI in movies as well as for identifying suspicious actions such as pickpocketing/bag snatching.

You can also use similar techniques to identify overcrowding in subway stations. This is difficult because the video feeds of the platforms are usually severely foreshortened, making it hard to judge how densely packed the passengers are. By identifying individual heads and their relation to known landmarks on the platform, the system can determine whether passengers simply need to move down the platform, or whether to prevent more passengers from entering the station until the situation improves.

Image construction

One of the exciting new fields is the use of AI to construct artificial images. There are a number of approaches to this. One approach is to take a segmented image and artificially fill in pieces that fit the segmentation. Another really powerful approach uses Generative Adversarial Networks (GANs). Without going into detail, these are able to construct incredibly accurate artificial pictures by combining features from multiple input sources. A famous demonstration produced photos of what appear to be familiar actors that were in fact created artificially from a database of celebrity photos.

Clearly, this approach has application in things like computer gaming (hence NVidia’s interest). But it could also be used to significantly enhance the world of augmented reality (where artificial content is overlaid onto the real world).

Another interesting application is converting text to images. On Google image search you can input something like “white and pink flower with petals that have veins”. That would return lots of results where images have been accurately labeled, but it will also contain quite a few random other pictures. Give the same description to a GAN, however, and it can artificially generate entirely new images that match it.

This has potential in many fields, and it could be used for more than simply constructing images. It also opens the possibility of constructing videos simply by describing them and of allowing new approaches to industrial design.

Data Visualization

If you’ll forgive the pun, Big Data is Big Business. But Big Data is useless without good analysis and visualization. This is where AI can come in. Let’s take a relatively simple example. As you know, when you take a photo with a smartphone, it is geotagged, so you can see where the photo was taken. If you use an image hosting site, then all those photos have been uploaded to a central location. Imagine if you could take the geotags from every single photo, anonymize them and plot them as a heat map.
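The binning step behind such a heat map is short to express in NumPy; the (latitude, longitude) pairs below are a tiny hypothetical sample:

```python
import numpy as np

# Anonymized photo geotags: parallel arrays of latitude and longitude.
lats = np.array([48.858, 48.860, 48.853, 40.689])
lons = np.array([2.294, 2.295, 2.349, -74.044])

# Count the points into a 10x10 grid of cells; the cell counts are
# the intensities a heat map would render.
heat, lat_edges, lon_edges = np.histogram2d(lats, lons, bins=10)

assert heat.shape == (10, 10)
assert heat.sum() == len(lats)  # every photo lands in exactly one cell
```

At real scale the same computation runs over millions of geotags, and the densest cells pick out exactly the landmarks people photograph most.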

That looks cool, right? But what can you actually learn from this? Well, it turns out you can use this technique to identify the location of landmarks in a city. Even cooler than that, you can use the same data to do the process in reverse: given a photo of a scene, Google has shown how you can use a CNN to work out the geolocation without needing a geotag.
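The geotag-binning idea behind such a heat map can be sketched with nothing more than NumPy; the coordinates below are simulated, anonymized points around two made-up landmark locations:

```python
import numpy as np

# Sketch: bin simulated, anonymized (lat, lon) geotags into a 2-D grid.
# Dense cells in the grid are the "hot" areas of the heat map.
rng = np.random.default_rng(1)

# Fake geotags: two popular landmarks plus uniform background noise
landmark_a = rng.normal([40.758, -73.985], 0.002, size=(500, 2))
landmark_b = rng.normal([40.748, -73.986], 0.002, size=(300, 2))
noise      = rng.uniform([40.70, -74.02], [40.80, -73.93], size=(200, 2))
geotags = np.vstack([landmark_a, landmark_b, noise])

heat, lat_edges, lon_edges = np.histogram2d(
    geotags[:, 0], geotags[:, 1], bins=50)

# The densest cell should sit near the busiest landmark
i, j = np.unravel_index(heat.argmax(), heat.shape)
print(f"hottest cell: lat={lat_edges[i]:.3f}, lon={lon_edges[j]:.3f}")
```

Swap the simulated points for real photo geotags and feed the grid to any plotting library, and you have the heat map described above; run the lookup in reverse and you are doing landmark identification.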

Other applications

Big Data allows a business to extract previously unknown insights into customer behavior and to visualize information in new ways to improve their Business Intelligence. AI can help here in several ways. Firstly, you can use AI to automatically find best-fit curves for highly complex and large datasets. Secondly, AI can automatically clean and analyze your data. When combined with IoT, this can transform industry, for instance allowing you to identify flaws as goods come off the line. Thirdly, by using tools like Jupyter, businesses can create collaborative AI projects that dynamically display visualizations alongside the code, allowing technical and non-technical team members to work more closely. AI can even be applied to traditionally non-technical industries like farming, combining data from satellite/drone images with sensors on the farm equipment to optimize the productivity of the land.
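The first point, automatically finding best-fit curves, can be sketched very simply. This toy fits polynomials of increasing degree and keeps the one that best predicts held-out data, a crude stand-in for more sophisticated AI-driven model selection; the dataset is synthetic:

```python
import numpy as np

# Sketch: pick the best-fit polynomial for a noisy dataset by comparing
# candidate degrees on a held-out slice of the data.
rng = np.random.default_rng(2)

x = np.linspace(0, 10, 200)
y = 3.0 * x**2 - 5.0 * x + 7.0 + rng.normal(0, 4.0, x.size)  # true curve + noise

train, hold = slice(0, 150), slice(150, 200)
best_degree, best_err = None, np.inf
for degree in range(1, 6):
    coeffs = np.polyfit(x[train], y[train], degree)
    err = np.mean((np.polyval(coeffs, x[hold]) - y[hold]) ** 2)
    if err < best_err:
        best_degree, best_err = degree, err

print("chosen degree:", best_degree)
```

The held-out slice is deliberately the far end of the range, so over-fitted high-degree curves and under-fitted straight lines both get punished for extrapolating badly.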


The use of AI and deep learning to analyze and construct video and still images is one of the most exciting developments in recent years. It has now become so mainstream that Amazon has released a product called DeepLens, allowing anyone to learn and play with applying deep learning techniques to images. Over the coming months, the expectation is that a huge number of startups will begin to leverage and develop these techniques further.

Achieve Higher Levels of Maturity in Testing Automation

In the age of software development agility, there is more incentive to move toward a continuous delivery model. Continuous Delivery (CD) enables a production-ready software release—at any time. Though it takes time to implement, configure, and remake the team culture, it dovetails well with Agile methodology. When done well, CD significantly reduces the release timeline from weeks to merely a few hours.

“Releasing software is too often an art; it should be an engineering discipline.”

― David Farley, Continuous Delivery

Continuous Integration (CI) is the core of Continuous Delivery. CI has a longer history, and it is easier to implement. The objective is to integrate code frequently into a shared repository. During the automated build process, each code check-in is verified to be compatible and functional with the production code base. This verification detects and solves problems much more quickly, and at much lower cost. Many IT and product teams now depend very heavily on CI across their development pipelines.
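The verification step can be reduced to a sketch like the following, where the build and test callables are hypothetical stand-ins for real pipeline stages:

```python
# Sketch of the CI idea: every check-in triggers a build and the test
# suite, and integration is rejected the moment anything fails.

def verify_checkin(build, tests):
    """Return (ok, report) for one code check-in."""
    if not build():
        return False, "build failed"
    failures = [name for name, test in tests if not test()]
    if failures:
        return False, f"tests failed: {', '.join(failures)}"
    return True, "ok: safe to integrate"

# Example: one passing and one failing check-in
good = verify_checkin(lambda: True, [("login", lambda: True), ("search", lambda: True)])
bad  = verify_checkin(lambda: True, [("login", lambda: True), ("search", lambda: False)])
print(good)  # (True, 'ok: safe to integrate')
print(bad)   # (False, 'tests failed: search')
```

A real CI server wraps exactly this loop around your version control system, running it on every push.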

Continuous Deployment

Continuous Deployment is an extension of CI, in which code deployment automatically begins the moment the code passes all CI tests and verifications. If the verifications are solid, then top-quality releases can move directly to the marketplace as quickly as possible. As you may know from direct experience, though, CD often doesn’t work out well, and this is usually due to neglect of a vital aspect of software development.

Continuous testing is the missing link

The view of many industry professionals is that software testing has been slow to keep pace with the innovation of Agile development. The problem is not that testing is inherently faulty; it’s that testing remains an afterthought for so many project managers, developers, and product managers. And yet, paradoxically, the importance of testing continues to increase as software becomes more complex.

If testing is done too far downstream, there is a higher risk that more defects will be discovered, and they will be much more costly to fix. Even if developers strive to run manual tests as soon as practicable, a team that has not implemented a continuous testing framework will face significant, time-consuming rework in each cycle. Many such teams still find themselves running tests after each phase—after the feature is coded, again after the build is complete, and again after the code is refactored.

Adding to the strain—and time drain—is the fact that more testing is necessary as the software ages and grows in complexity. But most software companies have limited resources and can’t find the additional time to run more tests. The unappealing options are either (a) to risk some compromise on quality or (b) to slip the delivery schedule.

It is possible—indeed quite feasible—to minimize this dilemma by implementing continuous testing and automating most testing efforts. A continuous testing tool/framework monitors for code changes, then automatically executes tests to ensure immediate identification of any exceptions or issues.
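A minimal sketch of that monitor-and-rerun loop might look like this, using simple mtime polling (real tools use filesystem events); `run_tests` is a hypothetical hook for whatever suite you execute:

```python
import os
import time

# Sketch of a continuous-testing loop: poll source files for changes and
# rerun the test suite whenever a modification is detected.

def snapshot(paths):
    """Map each path to its last-modified time."""
    return {p: os.stat(p).st_mtime for p in paths}

def changed(paths, last):
    """Return the files modified since the previous snapshot."""
    now = snapshot(paths)
    return [p for p in paths if now[p] != last.get(p)]

def watch(paths, run_tests, cycles, delay=0.0):
    """Poll for `cycles` iterations, rerunning tests on every change."""
    last = snapshot(paths)
    for _ in range(cycles):
        time.sleep(delay)
        dirty = changed(paths, last)
        if dirty:
            run_tests(dirty)        # immediate feedback on every change
            last = snapshot(paths)
```

The important property is that nobody has to remember to run anything: the change itself triggers the tests, which is what makes identification of issues immediate.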

As software complexity increases and more features are built on top of the older features, more tests must be configured and executed to ensure the same level of quality. Software teams can’t afford to spend more time—since the delivery schedule must remain intact—and many teams can’t add more manpower to design and run all the tests needed to accommodate the feature additions. A typical compromise is to run only a fraction of the tests. Since many of those excluded tests are critically important, the result is lower quality in the released product. Any CD process built on such compromises will eventually fail.

Pursuing maturity in test automation

While automation can significantly increase your testing speed and extend the scope of your code coverage, there is a risk in attempting to automate too many things—or the wrong things—too quickly. It’s also important to avoid having your best testers do work that doesn’t maximize their talents and skill sets. Take such risks and fail, and a golden opportunity is lost; the setback for the team can be significant.

“Asking experts to do boring and repetitive, and yet technically demanding tasks is the most certain way of ensuring human error that we can think of, short of sleep deprivation, or inebriation.”

― David Farley, Continuous Delivery

Whenever possible, an agile software development team should prioritize testing automation until it is a key concern for everyone. How well the team configures a solid, effective environment will directly determine how much benefit it reaps from testing automation.

If your team hasn’t put much emphasis on testing in your CD pipeline, then you probably don’t have a CD pipeline! There’s no shame in this, lads and lassies. We all crawl before we walk; we can’t run until we master our walk. Below, we explain how to progressively improve your development process with testing automation so that you can enjoy benefits such as these:

Find issues well upstream, by having developers work with testers to design tests before development is complete.
Involve QA in the build process, and speed up the release timeline by automating most of the testing.
Improve code quality by incrementally automating to the point at which the team is testing everything—on every build.
Minimize risk and gain the confidence that the code is solid on each and every build.
Initial automation steps

At the beginning of your automation effort, there are a number of easy wins that are achievable for the team. Start by having the development team check in their tests and immediately communicate all pass-fail feedback. Failures must be dealt with immediately. When you get the tests to pass, have QA begin building a small set of automated smoke tests.

Initially, it may be easier to execute these tests nightly, but the objective should be to have these running as an integral part of the build process—then automated to run on each successful build. After you reach the point at which these tests pass consistently, the team can incrementally augment the automation suite with more smoke tests. Early on, you can steadily gain confidence by simply adding one small test at a time to the automation suite. Cumulatively, this establishes the foundation of your automation practices.
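A small smoke suite of the kind described can start as simply as this sketch; the three checks are hypothetical placeholders for real probes such as "the health endpoint responds" or "a test account can log in":

```python
# Sketch of a tiny automated smoke suite: a handful of fast, critical
# checks run on every build, reported together so the build can fail fast.

def check_service_up():
    return True   # e.g. ping a health endpoint

def check_login():
    return True   # e.g. exercise the login flow with a test account

def check_core_workflow():
    return True   # e.g. create and read back one record

SMOKE_TESTS = [check_service_up, check_login, check_core_workflow]

def run_smoke_suite(tests=SMOKE_TESTS):
    """Run every smoke test and summarize passes and failures."""
    failures = [t.__name__ for t in tests if not t()]
    return {"passed": len(tests) - len(failures), "failures": failures}

print(run_smoke_suite())  # {'passed': 3, 'failures': []}
```

Growing the suite one small check at a time, as described above, is just appending to `SMOKE_TESTS`; the reporting and the build hook never change.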

Gaining confidence in testing automation

When you’ve laid the foundation, you can have QA turn their attention to the application’s various layers. If the application permits the team to separate the integration and UI tests, you can distribute these tests and run them in parallel.

“In software, when something is painful, the way to reduce the pain is to do it more frequently, not less.”

― David Farley, Continuous Delivery

If you’re like most teams, then your regression suite is a good place to begin configuring multiple layers to run in parallel. This will reduce the time to receive results and feedback. Much like automating smoke tests, you can incrementally increase the degree of automation across your regression suite—one test at a time if necessary—to reach the optimum level of coverage. As you automate more regression tests, the team can reclaim much of that saved time for exploratory testing and expand coverage to the edge cases, where you are sure to find the truly interesting bugs!
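The payoff of running independent layers in parallel can be sketched with a thread pool; the UI, API, and integration layers here are simulated with short sleeps rather than real suites:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Sketch: run independent test layers concurrently so feedback arrives in
# roughly the time of the slowest layer, not the sum of all layers.

def run_layer(name, seconds):
    time.sleep(seconds)          # stand-in for actually executing the layer
    return name, "passed"

layers = [("ui", 0.2), ("api", 0.2), ("integration", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(layers)) as pool:
    results = dict(pool.map(lambda args: run_layer(*args), layers))
elapsed = time.perf_counter() - start

print(results)   # {'ui': 'passed', 'api': 'passed', 'integration': 'passed'}
print(f"wall time about {elapsed:.2f}s vs 0.60s sequential")
```

Three 0.2-second layers finish in about 0.2 seconds instead of 0.6; with real suites measured in minutes, the same ratio is what buys back time for exploratory testing.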

Level up to quality gates

A team reaches the expert level of testing automation when it implements and automates quality gates, which automatically reject a build before it advances to further testing or to the release stage. The goal here is to provide early feedback quickly, so that any necessary root cause analysis can begin as soon as possible.

At this level, the goal is to refine the configuration so that just enough testing is done at each stage before you fan out and parallelize your testing in the next stage. It’s critically important to achieve a high level of confidence at each quality gate—before advancing to the time-consuming, costly automation that awaits further downstream.
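A quality-gate pipeline of this kind can be sketched as an ordered list of cheap-first checks that fail fast; the gate names and the fields of the candidate build are hypothetical:

```python
# Sketch of staged quality gates: cheap checks run first, and a build is
# rejected at the first gate it fails, before any expensive downstream
# testing is spent on it.

GATES = [
    ("static-analysis", lambda build: build["lints_clean"]),
    ("unit-tests",      lambda build: build["units_pass"]),
    ("smoke-tests",     lambda build: build["smoke_pass"]),
]

def pass_gates(build, gates=GATES):
    """Return (accepted, rejected_at) for a candidate build."""
    for name, gate in gates:
        if not gate(build):
            return False, name         # fail fast: root-cause this first
    return True, None                  # safe to fan out into parallel suites

good = pass_gates({"lints_clean": True, "units_pass": True, "smoke_pass": True})
bad  = pass_gates({"lints_clean": True, "units_pass": False, "smoke_pass": True})
print(good)  # (True, None)
print(bad)   # (False, 'unit-tests')
```

The returned gate name is the early feedback the section describes: the team knows immediately which stage to root-cause, and no time is spent running the expensive suites behind it.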

Leverage machine learning to improve your automation suite

The ultimate step in testing automation is to bring the power of machine learning (ML) into this domain. With highly innovative solutions such as Functionize, the time to leverage ML is now. It is truly feasible to use ML to build practical solutions that overcome many of the testing challenges summarized here.

Today, it’s possible to employ machine intelligence so that both testers and developers can create reliable, repeatable, automatic tests—in seconds. Functionize will automatically write tests for you, run them, verify them, and guide you to actionable insights if something goes wrong.

There’s no danger that machine learning will obviate the need for conventional software testing. What is more likely is that testing will become considerably more challenging as complex applications are tested automatically by machines. The central challenge will be redesigning or containing application functionality to mitigate the undesirable results that arise from the many cases an ML engine runs. Many of those ML test cases will be found to exceed what humans could devise—or even conceive.

Many testing professionals now approach their discipline deterministically: a quality assurance test produces only results that a test designer has predetermined to be correct or incorrect. Machine learning overturns all of that, since it performs a far deeper examination and analysis. Testing teams will have to wrestle with a large number of indefinite ML results, which will require a hard rethink of solutions to these new challenges. Though it brings fresh challenges, there is great value in leveraging machine learning to automate testing to a previously inconceivable level. Are you ready to level up?

The Future of Test Automation | Part 2 of 2 with Rebecca Karch

Yesterday we shared the first half of our two-part conversation with Rebecca Karch, QA advisor and former VP of Customer Success at TurnKey Solutions, about the evolving role of QA, especially as it relates to continuous delivery. This second half of Rebecca’s conversation covers:

Today’s test automation challenges
Trends that are shaping the future
The future of test automation
Skills that will prepare testers for the future

Today’s test automation challenges

For the complex web-based software and services that dominate the software industry, the single biggest challenge for test automation is object recognition. Whether it is a programmer writing test software, an open-source test software user, or a commercial test automation framework, reading and interpreting the Document Object Model (DOM) is critical so that each object can be described well enough to be used in an automated test. For services, being able to interpret, or import from, a Web Services Description Language (WSDL) is essential. And for those focused on API testing, monitoring message flows from top-level APIs through to the system back end, or validating side effects like database updates, can be extremely challenging. Each of these tasks requires skills well beyond the typical tester. And each can change quickly as software developers enhance the application, adding new objects with new functionality or updating object definitions and descriptions while fixing bugs.

Trends that are shaping the future

While I don’t have a crystal ball, I think that DevOps will continue to grow and the need to bring together development, test, operations, and line of business stakeholders to foster continuous integration, testing, and the rapid delivery of software will become the norm. Organizations are moving away from the centralized QA organization, although the need for testers and the art of testing will never go away in my mind.

I also see a new form of performance engineering emerging, replacing traditional performance testing, as software must perform consistently well across multiple platforms – including mobile and cloud platforms – with multiple OS environments and large numbers of users. And with the growing popularity of the Internet of Things (IoT) and inter-connected devices, security and usability join performance engineering in importance. The trend towards moving vast amounts of data seamlessly between multiple devices, platforms, and OS environments is also growing.

The shift is moving away from functional testing, and the testing landscape must change with it. I think that companies who let their users test for them will not be successful, because their customers are fed up. Companies will instead need more sophisticated tools that integrate and interoperate seamlessly across each segment of the SDLC. They can no longer afford to test manually except in one-off, exploratory-testing scenarios.

The future of test automation

As the testing landscape changes, I see a shift away from traditional test automation as a new class of testing frameworks emerges, revolutionizing the way testing is executed. These frameworks autonomously drive all testing activities. Instead of manually testing the software or writing software to test software, developers and testers can now use their unique skills to train the test framework to do the testing for them. These tools go well beyond conventional test recording tools to perform intuitive testing of highly complex web applications across multiple environments. I recently presented an overview of this exciting new technology at SQuAD, which represents the QA community in Denver.

Autonomous test solutions tackle the challenges of object recognition automatically, using artificial intelligence (AI) to continuously learn the DOM, WSDL and/or API. They use machine learning to generate comprehensive tests that ensure coverage for these objects, in context with how the application is used. Likewise, users can upload existing tests, automatically triggering object recognition. The more tests that are created or uploaded, the more the framework learns about the application under test.

An autonomous test framework can also generate or upload application data, which determines the execution flow for each test. And it can handle large amounts of data, satisfying the need for big-data coverage. Execution runs can be launched from a continuous integration tool and executed on a myriad of platforms and OS environments, which the framework can spin up on demand, reducing the hardware overhead needed for the test environment. Naturally, performance is measured as the tests execute.

Moreover, these frameworks have onboard analytics and dashboards to display failure information, break down run results by release, show performance trends, and detect areas where test coverage is weak. As the application changes, both the object recognition and the test cases can be automatically ‘healed’ when tests are rerun, and predictions can be made about future software weaknesses.

I see autonomous testing as being the future of software development, and it’s the only real innovation in the software testing space over the last 10-15 years.

Skills you should start developing now to be ready for the future

I would suggest that testers have a basic, working knowledge of Selenium because of its ubiquity in the industry, and because Selenium continues to be the plumbing beneath many testing tools. But I would caution testers against putting all their eggs in that basket, so to speak, since I think this need to understand Selenium is short-term and testers won’t need to be experts by any means.

Instead, to be ready for the future, I think testers need to spend their time understanding how their customers use the applications their company develops, how the applications and/or application modules interoperate with one another, and how the platforms on which they run can affect the user experience. Focusing on analyzing risk, understanding different usage taxonomies, and learning proper system-level test design methods such as performance, usability, and security is critical, because the user experience is what will drive success.

The single piece of advice I would give any QA executive is to have your testers become advocates for the user and use an autonomous testing framework to drive the testing for you. Every executive has innovation as one of his/her performance-improvement goals. Combining innovation with today’s technology will ensure that your customers are getting software that works every time, protecting your company’s bottom line.

The Future of Test Automation | Part 1 of 2 discussions with Rebecca Karch

I recently had the pleasure of sitting down with Rebecca Karch, QA advisor and former VP of Customer Success at TurnKey Solutions, to discuss the evolving role of QA, especially as it relates to continuous delivery. In this two-part series, Becky explores the current state of test automation and how autonomous testing is poised to transform the marketplace.

Over the course of the two interviews, Becky will cover:

The evolving nature of software quality assurance
The current state of testing automation
Testing’s role in Continuous Delivery
Today’s test automation challenges
Trends that are shaping the future
The future of test automation
Skills that will prepare testers for the future

A brief background: Becky Karch has dedicated her entire career to QA/Test and has deep experience with test automation. She has run QA/Test teams for several startup companies and has also led large QA organizations. Becky’s career has included overseeing the testing of all different types of software, including web-based, enterprise-level, client/server, and embedded systems. Over the past 30+ years, she’s seen first-hand many of the challenges that companies face day-to-day, most significantly the high cost of developing and maintaining automated tests, which forces organizations to spend significant amounts of time and money and often to revert to manual testing when automation efforts fail. That experience prompted her to transition from QA Director into the role of Customer Success executive for companies that design, develop, and deliver test automation frameworks that are revolutionizing the software test industry. She works tirelessly to ensure that companies focus on testing the right things at the right time and find long-term success with the test automation tools they purchase.

The nature of QA and the constant evolution of software

The QA industry has evolved to a point where open source is king. The notion that “I can’t be the first person to have this problem; let me see if someone else has a solution” is fueled by the fact that most software is being released more rapidly than ever, while testers are typically under-skilled because of tight QA budgets. This dangerous combination has testers searching the internet for quick, free answers. I say dangerous because, whether they are automated code snippets or manual test flows, open-source tests are not tailor-made for everyone’s application and often miss the mark. Despite this, Selenium, a “free” open-source automation framework, has become extremely popular, especially over the past five years (a quick Google search for Selenium tester jobs returns over 1 million hits), but it has a lot of limitations and takes a skilled, expensive resource to use. The fact that software is evolving quickly, employing more sophisticated methods to cover more sophisticated technology platforms, further reduces the usefulness of open-source solutions to the point where “free” is not really “free.” When I talk to testers and QA managers at events like StarEast/West, or at SQuAD (a local, Denver-based testing meetup group), most are using open-source Selenium for the little automation they’re doing, although manual testing is still largely predominant.

The current state of test automation

Test automation has changed very little over the past 10+ years in my opinion. Two primary test automation methods are being used: scripted/programmatic test development and record/playback automation using an automation tool.

First off, let me say that it makes no sense to me that testers are writing software to test software, since the test software can have as many bugs as the software being tested, not to mention that it takes a lot of time and skill (those testers don’t come cheap). Yet this is the method that dominates the test automation landscape. Companies are hiring scores of offshore resources to do this cheaply, but that is fraught with additional problems: offshore resources have trouble interpreting the real intention of the tests, and they are so far removed from the impact of software failures on a business that they have no real investment in the quality of the tests they are writing. Furthermore, as application software changes quickly, keeping those tests updated is no simple task; it’s easier to throw away the old test and write a new one. To an offshore team, rewriting tests fuels their bottom line, which is good for them, but it increases cost unnecessarily.

Record/playback technology is about 20 years old. While it offers a fast and easy way to record your tests, it’s only suitable for those paths (and only those paths) that were recorded. When a screen, feature, or path through the software application changes, the test becomes obsolete, requiring new tests to be recorded. Companies that are regulated (e.g., under SOX, HIPAA, etc.) cannot simply update or overwrite the recording, which many frameworks allow. I have seen many organizations get tripped up because they have spent too much time and resources organizing tests and cleaning up after each release, especially when new releases are coming at them rapidly.

There are a few other component-based and model-based methods in the marketplace, but these offer little benefit, since they are too confusing to use and don’t provide a reasonable means of measuring test coverage.

Testing’s Role in Continuous Delivery

With the industry’s push towards DevOps and the rapid release/deployment of software, QA teams are under more pressure than ever to ensure the highest quality possible in the shortest amount of time. QA organizations are having a hard time keeping up, and the burden of testing—or rather, bug discovery—is unfortunately placed on the end user.

In an Agile practice, I hear from testers and their managers that they are taking so much time automating and maintaining tests that developing software to test software becomes part of each sprint’s technical debt, creating an enormous backlog that slows down software delivery. This debt becomes so large that organizations opt for manual testing just to get some level of coverage before they release the software. Furthermore, testers are spending the vast majority of their time on functional testing, focusing only on a few specific new features or changes. But spending all your time testing any single module’s functionality is dangerous, since the typical user experience spans many modules or applications which need to work together seamlessly with adequate performance under heavy load.

It’s sad to say that as consumers, we’ve all seen websites crash and online shopping carts that don’t work correctly, and we’ve had our personal information hacked or compromised in some way. These so-called ‘software glitches’ are not typically things that can be found by functional testing. But the hit to a company’s bottom line is real – lost or disillusioned customers have a long-term cost to a business. And all fingers point back to QA.