Hybrid vs. Autonomous Engines - What’s Better for Development?
With the advent of autonomous vehicles, automated testing is becoming more important than ever. However, there are several approaches to test automation.
One widely cited forecast predicts that ninety-five percent of new vehicles sold will be fully autonomous by 2040. A decade ago that prediction seemed unimaginable to the everyday consumer, but it has become more plausible as products and services have woven artificial intelligence (AI) into our daily lives. The Society of Automotive Engineers (SAE) describes the transition from driver-operated to driverless vehicles in six levels, culminating in complete autonomy. (See Figure 1)
The first three stages are focused on monitored driving, where the driver must actively analyze the environment, and the last three stages are focused on non-monitored driving, where an autonomous driving system analyzes the environment. Transitioning between these two disparate approaches requires a change in regulations, infrastructure and mindset. Most importantly, it demands a guarantee that the latter, technological approach is just as reliable and accurate. (To learn more about the SAE's different levels of automation, see Driverless Cars: Levels of Autonomy.)
Figure 1: The SAE Levels of Automation Marking the Transformation from No Automation to Complete Autonomy
Source: Adapted from Mike Lemanski
Hybrid Engines Are Needed to Bridge Today’s Autonomous Chasm
Self-driving cars entering the market will need hybrid engines that can switch between manual and self-driving modes, letting humans and machines work together and giving drivers a sense of security. This need for security when leveraging AI applies not only to the automotive industry, but also to software development. The SAE’s framework from monitored to non-monitored driving has highlighted a widespread autonomous chasm where humans and machines must work together to provide assurance in products and services across industries.
Humans and Machines Must Work Together to Develop Scalable, Stable, and Secure Applications
Embracing total autonomy requires the combination of human and machine to ensure scalable, stable and secure AI-powered systems. Software teams that leverage hybrid automation testing tools combining property-based and AI-powered visual recognition will not only realize performance improvements, but also achieve quality at speed while using test coverage to ensure an application functions properly. For a test automation engineer, ensuring the security behind an application requires combining two separate, but related, dimensions. (See Figure 2)
- The first, level of accuracy, gauges the stability of automated tests and the ease of test creation that comes from recognizing diverse application components.
- The second, ease of maintenance, gauges the scalability of automated tests and how easily test scripts can be maintained after application updates, enhancements and fixes.
Four Types of Test Automation Engineers
These two dimensions of humans and machines working together to develop test automation form four different types of software testing roles. Testers in the bottom left are Traditionalists. These testers do very little with automation and primarily conduct manual testing, increasing maintenance efforts to achieve coverage. They may be unaware of opportunities in automation or may be making small investments without effective executive sponsorship or top-led transformation in place.
Automation engineers in the bottom right are Maintainers. They rely on conservative, prudent measures to ensure test automation scripts are accurate. Maintainers understand the need for expansive test coverage, but can be skeptical of an approach that is not individually programmed to their application properties. Their careful approach toward test automation fosters stability and accuracy, especially when distinguishing between highly similar objects, but the resulting scripts can be difficult to maintain against dynamic properties. (For more on automation, see Automation: The Future of Data Science and Machine Learning?)
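The Maintainer's property-based approach can be sketched as a lookup that matches an element's internal attributes exactly. This is a minimal, hypothetical illustration in Python (the element model and function name are invented for this sketch, not taken from any specific testing tool):

```python
# Hypothetical sketch of property-based element recognition:
# an element matches only if every requested property is equal.
# Exact matching is accurate but brittle: if a developer renames
# an id, the locator silently stops finding the element.

def find_by_properties(elements, **props):
    """Return the first element whose attributes match props exactly."""
    for element in elements:
        if all(element.get(key) == value for key, value in props.items()):
            return element
    return None  # the script breaks here after an application update

ui = [
    {"id": "btn-submit", "tag": "button", "text": "Submit order"},
    {"id": "btn-cancel", "tag": "button", "text": "Cancel"},
]

assert find_by_properties(ui, id="btn-submit")["text"] == "Submit order"
# After a rename to "btn-place-order", the same locator finds nothing:
assert find_by_properties(ui, id="btn-place-order") is None
```

The exact match is what gives this style its accuracy when telling near-identical objects apart, and also what makes it fragile against dynamic properties.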
Testers in the top left are Visualists who have adopted visual testing capabilities that are often powered by artificial intelligence. AI in software quality enables automated tools to access application properties often missed by standard object recognition techniques. For example, by capturing UI elements at a textual level with AI, software teams can ease maintenance for dynamic properties and broaden coverage for complex consoles, data visualization tools and PDFs.
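Capturing UI elements "at a textual level" can be approximated by matching against the text a user actually sees, rather than against internal properties. A minimal sketch in Python using the standard library's difflib for fuzzy matching (the element model is invented for illustration; real visual-testing tools use OCR and image models rather than a dictionary):

```python
import difflib

def find_by_visible_text(elements, target, cutoff=0.6):
    """Return the element whose on-screen text best resembles target."""
    texts = [el["text"] for el in elements]
    matches = difflib.get_close_matches(target, texts, n=1, cutoff=cutoff)
    if not matches:
        return None
    return next(el for el in elements if el["text"] == matches[0])

ui = [
    {"id": "x7f3a", "text": "Submit order"},  # auto-generated, unstable id
    {"id": "x9c21", "text": "Cancel"},
]

# The locator survives id churn (and minor text changes) because it
# targets what the user sees, not how the element is implemented.
button = find_by_visible_text(ui, "Submit Order")
assert button["id"] == "x7f3a"
```

This is why the Visualist's scripts tolerate dynamic properties: the locator is anchored to the rendered output, which changes far less often than the markup behind it.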
Automation engineers in the top right are Hybrid Masters. They truly understand how to drive value with stable and scalable testing approaches to deliver maximum test coverage and security. They combine the high accuracy of manually scripted, property-based tests with the high ease of maintenance of AI-powered testing to benefit from each approach’s strengths: stability and accuracy from property-based recognition and testing of the former, and speed and scalability from AI-powered visual testing of the latter.
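The Hybrid Master's strategy can be sketched as a simple fallback chain: try the fast, exact property-based locator first, and only fall back to the slower but more resilient text-based match when the properties have drifted. Everything here (the element model, the function names, the fuzzy-match cutoff) is a hypothetical illustration, not the API of any particular tool:

```python
import difflib

def find_hybrid(elements, element_id, visible_text):
    """Exact property match first; fuzzy visual-text match as fallback."""
    # 1. Property-based pass: accurate and fast while ids stay stable.
    for el in elements:
        if el.get("id") == element_id:
            return el
    # 2. Visual pass: resilient when an update renamed the property.
    texts = [el["text"] for el in elements]
    match = difflib.get_close_matches(visible_text, texts, n=1, cutoff=0.6)
    if not match:
        return None
    return next(el for el in elements if el["text"] == match[0])

ui = [{"id": "btn-place-order", "text": "Submit order"}]

# The id changed since the script was written, but the test still
# passes because the visual fallback recognizes the on-screen text.
assert find_hybrid(ui, "btn-submit", "Submit order")["id"] == "btn-place-order"
```

The fallback ordering is the design choice that matters: the exact pass preserves the Maintainer's accuracy when distinguishing similar objects, while the visual pass supplies the Visualist's resilience only when it is actually needed.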
Figure 2: Four Types of Test Automation Engineers
Hybrid Masters Surpass Performance Metrics
Software quality teams that have either extensive property-based recognition or visual testing frameworks will outperform manual testers in time savings and coverage. Hybrid Masters will surpass performance metrics even further by leveraging a combination of the two approaches. What comes easily to humans, such as identifying subtle differences between two images, can be tricky for machines, and what is straightforward for machines, such as analyzing gigabytes of data or translating images to computer-readable content, remains very difficult and time-consuming for humans. Testing desktop, web or mobile applications requires both kinds of capabilities.
The advancement of technology is gaining speed, and software teams in every industry are asking what it implies for producing highly stable, scalable and secure applications. Teams face a spectrum of alternative approaches and tools to tackle development and performance goals, sometimes inadequate and often hard to use. Infusing AI into traditional automated testing tools creates a hybrid engine that can easily detect and test any UI element for maximum test coverage.