"Do we really need manual testing?" Today, nearly every enterprise technology leader faces this question from all corners. In the age of automation, manual testing is perceived to have lost its charm. The automation testing market exceeded USD 15 billion in 2020 and is anticipated to grow at a CAGR of over 16% from 2021 to 2027.
Many leaders question the ability of manual testing to work effectively for cloud-based application development or product-led development strategies. In addition, many complain about biased test behavior, which paves the way for bugs to creep into production.
However, if you are a wise technology specialist, you will know that manual testing is not a completely obsolete practice. In fact, the role of manual testing in the software development process is still significant, even with end-to-end automation testing strategies already in place.
To that end, this blog covers:
What is manual testing?
Why is manual testing important?
Why should manual and automation testing co-exist?
What is manual testing?
In simple terms, manual testing refers to the practice of validating software for its performance and functionality by human test engineers. It is a practice that has been around ever since software development made its way to the mainstream.
In the initial days, developers themselves played the role of test engineers. However, over time, the importance of quality assurance rose significantly, and dedicated test engineers became a critical part of every software development activity.
Why is manual testing important?
Here are six reasons why manual testing remains vital for today's software development initiatives.
The human advantage
We are arguably at a revolutionary point where artificial intelligence and automation have matured to a level that often challenges human cognitive power.
Nevertheless, from a contextual and awareness perspective, the human brain is still one of the most intelligent computing infrastructures known to man.
Software testing is a case in point. Test automation tools and platforms can certainly accelerate testing activities to a great extent.
But there are areas where the human eye for detail can pick out abnormalities or suspicious behavior that testing tools may miss. This is most notable in UI testing, usability testing, and acceptance testing from an end-user perspective.
Bringing automation into acceptance testing will save you time, but the tool may not pick up visual issues. Manual testing mimics end-user actions more faithfully, so previously undiscovered problems are more likely to be identified this way.
Feasibility of test cases
Not every test case is a good candidate for automated end-to-end testing. For example, consider testing the features of a mobile app that relies extensively on tap gestures on the smartphone screen.
There are ways to automate usability testing of tap functionality. But only a human touch can confirm whether the app responds correctly across the full range of touch intensities and gestures.
In addition to such technical complexity, some test cases are simply too costly to automate. For smaller projects or initiatives, organizations will not want to incur high testing costs.
In such situations, manual testing gains significance owing to easier adaptability and better feasibility for the test case.
Erroneous scripts
Test scripts are ultimately lines of code, and errors can make their way into test script development. This may result in scenarios where automated testing reports false positives or false negatives.
In the case of false positives, the test automation tool may report the presence of a bug or defective condition even if it was not present. For false negatives, the opposite happens, and an existing bug may not get reported.
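As a minimal illustration, consider the hypothetical `apply_discount` function below: a loosely written assertion lets a real defect slip through (a false negative), while a precise assertion catches it. A false positive is simply the reverse, where a flawed or flaky script fails even though the product is fine. This is only a sketch; the function and tests are invented for this example.

```python
def apply_discount(price, percent):
    # Deliberate bug for illustration: the discount is added instead of subtracted.
    return price + price * percent / 100


def test_discount_loose_assertion():
    # Over-broad assertion: it only checks the return type, not the value,
    # so the defect above goes unreported (a false negative).
    assert isinstance(apply_discount(100, 10), (int, float))


def test_discount_strict_assertion():
    # A precise assertion exposes the defect that the loose test missed.
    assert apply_discount(100, 10) == 90
```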
Either way, businesses cannot rely solely on test automation for validating software that handles critical operations. They need an extra layer of surety and assurance.
Manual testing guarantees this second layer of trust. As such, it must be an irreplaceable part of critical enterprise application test strategies.
Cost-efficiency
An enterprise may not always carry out massive application development initiatives. Most of the time, the technology team will be rolling out minor features or small improvements and enhancements.
Since the test coverage scope for such changes is small, manual testing will often be more cost-effective, helping enterprises save on their annual IT budgets.
Agile projects require constant rewrites
Today, most enterprises leverage a product-driven approach for their application development initiatives. Applications are built and deployed on the cloud, following agile development philosophies.
However, there will be a constant cycle of feedback incorporation and deployment when working in agile mode. At times, a sprint may include drastic changes in UI or the addition of major functionality. Over the lifecycle of the product, such instances may be numerous.
In such scenarios, automated regression testing and even smaller test automation scripts may require constant changes. This may seriously impact costs and timelines.
Hence, considering manual testing for relatively smaller iterations in an agile development environment makes more sense.
Better problem understanding
It is essential to understand any problem from a technical, emotional, and cognitive perspective. Doing so helps prevent similar bugs or defects from recurring and encourages teams to develop best practices for the development stages.
Manual testing enables a better understanding of the problem from an end-user perspective. It brings human judgment and empathy to bear on the rather complex decisions that automation alone cannot make.
Why should manual and automation testing co-exist?
In the digital economy, enterprises rely on numerous digital applications for their daily needs. Besides, they follow different development paradigms and leverage diverse infrastructure like the cloud to run their business better.
As a result, relying purely on automation testing may not prove entirely feasible. In other words, a strategic alliance of manual and automated testing must guide the quality assurance for modern digital applications.
By leveraging the best of both worlds, enterprises can secure long-term results and ROI on their digital investments. As it stands, most internal testing teams are already well versed in manual test strategies and execution.
What they need is a powerful end-to-end test automation platform that can empower them to work better and assure the highest standards of quality for all enterprise applications.
This is where ACCELQ can help make a difference. Get in touch with us to explore how our continuous test automation platform seamlessly complements your manual testing strategies.
Developed as an agile development technique in the late 1990s, test-driven development (TDD) presented itself as an ideal solution for limiting bugs, reducing bug-fixing costs, and accelerating agile software development.
A study by Microsoft and IBM dating back to 2008 pointed out that initial development with TDD incurred 15-35% more development time than conventional techniques. However, the study’s authors theorized that TDD pays off in the long term by reducing debugging time and encouraging incremental and iterative development.
Fast forward to today, and TDD is, well and truly, driving agile software development. This article serves as an introduction to what TDD is and how it benefits businesses in end-to-end product development.
An overview of TDD
TDD stands for Test-Driven Development. Its most basic premise is to write a test case for a small, functional component before the component exists. This test initially fails because the most straightforward function it exercises has not yet been implemented.
Subsequently, you write the simplest code that makes the test pass, and then refactor it until it cleanly fulfills the required functionality.
This means that TDD supports writing test cases first and adjusting the code accordingly. Thus, in essence, tests drive the code, not the other way around.
The many business benefits of TDD
TDD has grown to the point where it is now a mainstream development practice, so it's only natural for businesses to adopt it. It delivers a range of business benefits.
A focus on requirements right from the start
As elucidated above, TDD is about writing tests first and everything else second. As a result, this practice helps capture requirements right at the beginning of the project. This ensures that the code:
Does not contain any underlying assumptions about the critical functional requirements
Focuses entirely on the outlined test case scenarios
The coding phase is inevitably much more refined and business-focused with this as a baseline.
Reduced programming sophistication
When serving varying demographics, your application incorporates a certain degree of complexity, and the underlying architecture and code have to handle it.
With TDD, that sophistication does not get in the way of test creation. The sustained focus on test cases ensures that the code evolves in small, structured steps, and such orderly evolution also results in a more refined and stable architecture.
Consider this: whenever a new function is added, developers work through a simple three-step loop, sketched below:
Write the test case scenario
Prepare the code to pass the test case
Refactor the existing and new code to make it more stable
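To make the loop concrete, here is a minimal pytest sketch built around a hypothetical `slugify` helper; the function name and behavior are illustrative, not taken from any particular project.

```python
# Step 1 (red): write the test first. Run it; it fails because slugify() is not implemented yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


# Step 2 (green): write the simplest code that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")


# Step 3 (refactor): tidy the implementation and add the next failing test
# (punctuation, repeated spaces, etc.) without breaking the behavior already covered.
```

Each pass through the loop leaves a small, permanently running test behind, which is how the suite and the feature grow together.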
Documentation for cross-functional teams
Of course, TDD accrues several benefits for both developers and enterprises. But what about documentation? During development, developers move through the test cases and code in a specific sequence, which keeps them focused on writing the most basic code that passes.
But at the same time, all the test cases combined provide a comprehensive description of the feature. Thus, TDD does not require any separate code documentation, at least in terms of what the technical team needs to comprehend the feature.
Naturally, documentation for a lay audience must be far more consumable. TDD gives such teams a head start because of the functional simplicity involved.
Since the documentation is organized right off the bat, it’s relatively easier for anyone from the cross-functional team who’s joining in to collaborate or perhaps replace someone absent.
On that note, TDD serves to:
Reduce the feedback time from cross-functional teams
Expedite better design and code architecture decisions
Increase the likelihood of delivery of better-quality products
Automation and test coverage
TDD’s principle of writing automated tests to drive development is a compelling aspect of the practice from a quality assurance standpoint. The unit tests created through TDD also provide a solid base for later verification.
Because each unit test leads to code creation and ultimately refactoring, TDD pushes test coverage toward 100%. This effectively translates to code with far fewer bugs. Considering how tough it is to track down bugs and their fixes later, such high coverage goes a long way in helping the QA team start from a clean slate.
Rapid change accommodation
Agility, flexibility, scalability, and adaptability are all pretty much the foundations of agile development and the associated TDD. With TDD, developers have thorough knowledge and documentation of each of the specific functionalities that the application must offer.
This goes a long way in refactoring on-demand, i.e., if a new function is added to the application, its corresponding unit test will come into existence. Subsequently, the developer will update the code to pass the test.
As such, TDD helps accommodate any immediate changes or additions required for any of the existing functionalities.
Customer-facing methodology
Nothing’s more important than getting the right feedback at the right time. And to make this happen, TDD supports frequent feedback on the development and, on a holistic level, the business needs.
The whole process of writing functional test cases and the resulting refined code is a swift way of getting feedback on the most basic must-have application functionalities.
As a result, each of the hypotheses in building and refining the application is validated right away. It’s, thus, safe to assume that the application has a higher chance of eliciting positive feedback from the end-users right off the bat.
In a Nutshell - TDD Cultivates Agile Teams
As seen above, TDD centers itself around the development of automated tests. With the creation of such tests, it’s easier for developers to work through a repetitive process.
The resulting automation in this process helps a team focus on the development at hand rather than on the larger technicalities. The focus on the development is further complemented by the team’s ownership of the code and its subsequent evolution. All of this greatly reduces the likelihood of bugs or missing features.
Best of all, TDD helps teams evolve around a standard methodology rather than just being a part of the normal development process. In this case, it’s up to the developers and businesses to ensure that the functionality and the associated code evolve in an orderly fashion.
Across several technology teams, the test strategy vs. test plan debate is becoming a major challenge. As more businesses drive their key growth via digital channels, ensuring that applications exhibit the highest quality standards is of prime importance.
To that end, enterprise tech leaders must thoroughly know the facts in the test strategy vs. test plan debate. In short, they must clearly distinguish between test plans and strategies at all levels.
However, if the business's core area of focus is non-tech, it's likely that testing strategies will not be of high priority from a leadership standpoint. As a result, the difference between test plan and test strategy will not be effectively propagated amongst teams. Eventually, it'll result in more challenges.
Why is it important to resolve the test strategy vs. test plan debate?
The very definition of test plan and test strategy is the primary reason all stakeholders must be fully aware of their differences. Let us consider both as documents to have a clearer definition.
A test plan is a formalized and focused document prepared from the SRS document that outlines the areas to be tested (scope) and the different activities to be done while testing. An enterprise will create the test plan for every software development project it undertakes.
On the other hand, a test strategy is a high-level guidance document that lays out the approach that an organization must take in any test activity. It focuses on best practices and essential traits of any software that must be evaluated with respect to organizational objectives.
In addition, the test strategy document encompasses information such as the following:
Type of testing to be followed for different scenarios
Tools to be used
Test environment configuration practices
Release control practices
Risk mitigation procedures
Roles and responsibility assignment practices, and much more.
A test plan needs to derive its content from the test strategy. This is the fundamental difference between the two. Based on a project, the test manager can prepare the test plan and assign roles and responsibilities to different people based on guidance from the test strategy principles.
Unless the test plan follows the guidelines laid out in the test strategy document, there's a high probability it can compromise the enterprise's ambitions.
What are the key differences between a test plan and a test strategy?
If developers, test engineers, and other stakeholders do not have clarity on the difference between a test plan and a test strategy, there could be serious repercussions.
To enable decision-makers to make more informed choices, let us explore the five major differences between test plan and test strategy:
Level of usage
An enterprise defines the test plan at a project level. Whenever they plan a new feature addition or upgrade for the application, they treat the initiative as a project and prepare a test plan accordingly.
Contrarily, a test strategy is prepared at an organizational level. It serves as an umbrella document that sets the foundation for every project to build its individual test plan on top of it.
Re-use
An enterprise rarely re-uses a test plan, given that one is prepared for every individual project. Teams can re-use some parts or ideas, but replicating a test plan exactly is not advisable.
A test strategy is a long-term document. Enterprises use it across multiple projects at the same time. In fact, it can be re-used any number of times until the organization conducts an audit and determines the necessary changes.
Constituents
A test plan contains the following:
Test plan ID
List of features of the application that need testing
The technique or test approach to follow for each component
Test automation tool to use
Pass/fail criteria, roles, schedules, and other project-specific data.
Comparatively, the test strategy document contains a generic roadmap of the following:
Testing type for each scenario
Test processes in the organization
Objectives to track during any test activity and more generic control statements.
Management
The test manager or test lead in a project prepares and manages the test plan. They take notes from the test strategy and design the most tailor-made test plan for every project.
However, the test strategy requires inputs from project managers or product managers and other senior technology and business leaders in the enterprise. It outlines the overall quality objective that must be followed for every development activity. Hence, typically, senior-level leaders or product/project managers prepare it.
Existence
In essence, a test plan has an independent existence in every project. Since it describes what actions to do and who will do them, it has a prime focus and individual existence.
On the contrary, the test strategy is often a part of the test plan. It exists from an organizational strategy point of view and is the foundation for defining end-to-end testing practices.
However, from a test execution standpoint, the test strategy does not have an individual existence; multiple projects leverage it through their unique test plans at different times.
How can enterprises leverage maximum ROI from their testing strategies?
We have covered the critical aspects of the test strategy vs. test plan debate. Now that the significant differences are apparent, enterprises must focus on achieving maximum ROI from their test initiatives.
In sum, they must prepare and execute a testing roadmap that follows the test plan while at the same time adhering to the organizational test strategy.
But, to achieve this, enterprises need to incorporate end-to-end automation testing to help streamline testing efforts without bias.
ACCELQ's end-to-end test automation platform empowers your teams to seamlessly manage all their testing activities. With zero-coding and end-to-end business process assurance, it becomes easier for software teams to be compliant with the organization's test strategy and prepare the most efficient test plan.
Book a free trial to explore more about us.
In today’s fast-paced world, most companies are looking for ways to reduce products’ time to market, and in-house teams and testing companies are challenged more than ever. As a result, testing professionals need a solid approach to continuous testing to keep up with changing requirements and frequent updates to the product. Regression testing in agile helps teams concentrate on new functionality while preserving stability with every product increment and update. Teams use regression testing to make sure that tested software continues to perform well after every modification.
In this article, we will cover:
Understanding regression testing
The concept of regression testing in agile methodology
Points to keep in mind for automated regression tests
Challenges of regression testing in agile automation
The importance of regression testing in an agile environment
Advantages of regression testing in agile
Possible challenges and solutions that can help agile teams reach the high velocity and quality they strive for
Understanding Regression Testing
In reality, no piece of software is 100% bug-free. Developers make changes that may include new features, bug fixes, performance tweaks, and so on. While adding new features or fixing bugs, they can end up creating new defects that affect the performance and functionality of existing, already-tested software. Therefore, a specific level of functionality must be achieved and sustained through proper testing. This is where regression testing comes into the picture.
Regression testing involves re-testing a portion of the software after the latest changes or additions to ensure that existing functionality has not been adversely affected. Performed correctly after every modification, regression testing helps teams catch any critical bugs that may have been introduced.
Regression Testing in an Agile Context
Agile development is a speedy, adaptive, and dynamic development process. It offers advantages like quicker ROI, faster go-to-market, more responsive customer support, reduced risk, continuous improvement, and so on. Sprint cycles are short, and developers release new features in record time.
Likewise, testing rounds are expected to be compressed to keep up with the development cycle, but in reality that is often not the case. Usually only a few new features are developed in a sprint; nonetheless, testing must cover both new and old features to unmask defects that the new features may introduce. With every new build, it is necessary to make sure that new additions or changes to the code have not jeopardized existing workflows. The user’s experience should only be improved by new features, never made harder. Regression testing is all about this: ensuring that old code works alongside the new code and that unchanged functionality remains consistent with every build.
For regression testing to flourish in an agile environment, the testing team should create the regression suite from the start of product development and keep building on it in subsequent development sprints.
Points to keep in mind for Automated Regression Testing
Regression tests need continuous repetition. Every release cycle must incorporate this testing to ensure that newly developed features or modifications have not broken anything. Not every regression is triggered by a newly added feature or a routine bug fix; alternative causes include a new browser version, database updates, or other environmental changes, as well as performance or security problems. When stable, repeatable regression cases are automated, manual testers can focus on testing various environments and more complex issues.
Consider the following points when you plan a strategy for automating regression tests:
Don’t aim for 100% coverage.
Achieving and maintaining 100% regression coverage is difficult, so concentrate on covering roughly 60-80% of the test cases.
Use your manual tests as a starting point.
For automated regression testing to be effective, run all your manual test cases at least once before automating them.
Regression Testing Automation Challenges
There are two significant obstacles the team should be aware of that can set back automation efforts:
Maintenance
Agile projects are known to change quickly, so automated regression test suites need to be maintained continuously to incorporate all project modifications and avoid test cases becoming obsolete.
False positives
A false positive is a test that reports a failure even though the product has no issue. This can happen for various reasons, such as outdated test cases, timing issues, and other external causes. Poorly designed or poorly coded “flaky” tests that give varying results are another common culprit.
Importance of Regression Testing in Agile Environment
In an agile environment, the organization concentrates on the workflow planned for the sprint. Although the product team is focused on a particular product workflow, they should consider the risks their changes may introduce to the complete system. A regression test will reveal all the regions affected by the team’s recent changes across the code repository, provided coverage is sufficient.
Regression tests should be executed soon after modifications are made; ideally, the full regression suite should run automatically as part of the build process. If feedback arrives only later, the team may already have moved on to changes in other areas of the system.
Advantages of Regression Testing in Agile
It increases the product’s overall stability and provides a flawless user workflow by testing the product within the sprint cycle.
It improves the chances of detecting and reducing defects, giving an overall picture of how the entire product behaves.
It helps improve the overall condition of the product and the user experience.
It reduces rework for developers; they no longer need to rewrite code that fails to work as specified because of insufficient testing.
Regression Test Challenges faced in Agile Environment
Several factors can make regression testing challenging for an agile team:
Frequent changes
Customers and managers at times propose excessive changes that wipe out entire iterations. Any automated regression test plan is at risk at this point.
Cannot use record or playback testing tools
Traditional test-last tools with record-and-playback features cannot be used until the functionality is ready. Therefore, conventional test automation is hard to practice in an agile environment.
Test growth
The regression suite grows significantly after each sprint, to the point where it is no longer feasible to run it promptly. As a result, testers must keep the suite at a manageable level by optimizing test cases regularly and eliminating out-of-date ones.
Lack of communication
Agile teams thrive on collaboration and communication. Therefore, effective communication should exist between the testing team, business analysts, developers, and stakeholders. Without effective communication, changes made to the software may go untested, creating severe difficulties down the line.
Specialized testing skills
As the project advances, more specialized testing skills are needed, especially when performance and integration testing are involved. Hence, the team should have a specialized tester from the very start, or at least one in the company who can help out when needed.
Test case maintenance
The more test cases we automate, the more thoroughly we can check the current workflows. However, more automated test cases also mean more maintenance. If test cases are too loosely coupled to the product functionality, they may pass even when issues persist; if they are too rigid, they will need to be revised and updated with every minor change to the system.
Solutions for Regression Testing in Agile Environment
Before building the solution (strategy), we need to assemble some data:
Gather all the test cases to execute.
Find out all the enhancements that can be achieved in the test cases.
Evaluate the time for execution of test cases.
Outline which scenarios to automate and how.
Building a Regression Testing Strategy
Make use of smoke and sanity test cases
Smoke and sanity testing come before regression testing and can save testing teams time. Sanity testing is a walk-through of the application’s fundamental functionality before any deeper testing of a new release, informally confirming that the functionality works as planned. Smoke testing requires a subset of fast-running test cases that cover primary, core product workflows such as registration and login.
Sanity and smoke tests quickly evaluate whether an application is too flawed to justify further testing such as regression testing. That is far better than running regression tests against a product that doesn’t load or allow login, and then examining why hundreds or thousands of regression tests failed.
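One lightweight way to wire this up, assuming a pytest-based suite and a hypothetical deployment URL, is to tag the core-workflow checks with a `smoke` marker and let the CI pipeline run that subset before the full regression pass; the marker name, URL, and commands below are illustrative only.

```python
import pytest
import requests

BASE_URL = "https://app.example.com"  # hypothetical deployment under test


@pytest.mark.smoke
def test_login_page_loads():
    # Core workflow: the login page must at least be reachable.
    assert requests.get(f"{BASE_URL}/login", timeout=10).status_code == 200


@pytest.mark.smoke
def test_user_can_register():
    # Core workflow: registration must accept a basic, valid request.
    response = requests.post(
        f"{BASE_URL}/register",
        data={"email": "smoke@example.com", "password": "x"},
        timeout=10,
    )
    assert response.status_code in (200, 302)

# Register the marker (e.g. in pytest.ini) and gate the full run on it in CI:
#   pytest -m smoke && pytest -m "not smoke"
```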
Discovering error-prone areas
Incorporate the test cases that break most often. Some sections of the application are so error-prone that they fail even after a minor code change. Keep track of those failing test cases throughout the product cycle and include them in the regression test suite.
Test case prioritization
In a risk-based plan, a testing team chooses test cases that cover the application’s most affected areas by modifications in the project. They also arrange them according to priority. For example, regression tests concentrate on product areas with the highest observed risk of quality concerns.
Prioritize test cases according to the most significant and most commonly used functionalities. Selecting test cases by priority keeps the regression suite lean, saves time and resources, and makes it feasible to run regression tests faster and more frequently.
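As a rough sketch of how such prioritization might be automated, the snippet below scores made-up test case metadata by business priority and recent failure history and picks the riskiest subset to run first; the fields, names, and weights are assumptions, not a prescribed scheme.

```python
# Illustrative only: this metadata would normally come from a test management
# tool or from markers on the tests themselves.
test_cases = [
    {"name": "test_checkout_flow", "priority": 1, "recent_failures": 3},
    {"name": "test_profile_update", "priority": 3, "recent_failures": 0},
    {"name": "test_login", "priority": 1, "recent_failures": 1},
    {"name": "test_newsletter_banner", "priority": 4, "recent_failures": 0},
]


def risk_key(case):
    # Sort so that cases with more recent failures come first,
    # then by business priority (1 = most important).
    return (-case["recent_failures"], case["priority"])


ordered = sorted(test_cases, key=risk_key)
top_subset = ordered[: len(ordered) // 2]  # run the riskiest half first
print([case["name"] for case in top_subset])
```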
Examining bug reports
Some regression testing tools integrate with error tracking tools. This is helpful because it provides rich data about what happened during a regression test: whether it broke, what failed, and which line of code contains the bug. Error tracking tools can also capture screenshots and other metrics about failures during regression testing, helping teams recognize and debug the issue.
Communication
Testers should interact with the business team to monitor and evaluate changes in requirements. In addition, they should communicate with developers so they know which changes were made during an iteration.
Regression testing in Agile is essential. But, to do it right, testers must concentrate on both the efficiency and speed of test cases. Without a solid plan, more problems and challenges may occur during an entire project.
The software development world plays host to many debates, and BDD vs. TDD is one that continues to rage on.
Consider this: the world’s first-ever software was written and executed in 1948. This amazing innovation correctly computed the highest factor of 2^18 in “only” 52 minutes.
Since those early days, software development has come a long way and set a strong foundation for the modern digital economy. As such, it has transitioned from the traditional waterfall approach to an agile approach. Likewise, it has shifted from Test-driven Development (TDD) to Behavior-driven Development (BDD). Or has it?
Since 2018, the inclination towards BDD has increased by 60%. The use of BDD in scripting and test automation is growing rapidly.
So the facts do seem to suggest that organizations are parting ways with TDD; however, the scenario is more intricate than it appears. To explore it, we have to answer a few questions.
TDD and BDD: The Similarities
Before exploring the TDD vs. BDD debate, it’s important first to unpack the meaning of TDD and BDD and understand their commonalities and differences.
Both TDD and BDD are testing methods to find bugs and correct issues in software applications. Also, in both forms, test writing happens before code writing.
These similarities notwithstanding, TDD and BDD are different approaches to testing.
What is the Difference between BDD and TDD?
Test-driven Development
Test-driven development or TDD means that a developer writes a test for functionality and runs it to see it fail because the functionality doesn’t exist. Then they would write code to make the test pass. Finally, after refactoring code and repeating this “Red-Green-Refactor” loop multiple times, they would have code that creates a complete feature.
The TDD testing methodology is implemented from a developer’s perspective. They write automated test cases before functional pieces of code. The goal is to understand “is this code valid?” If it’s not, the test will fail. Only then will the developer refactor the code, i.e., change it without modifying its key behavior. They then run the test as many times as needed for it to pass successfully.
Behavior-driven Development
Behavior-driven development (BDD) is derived from TDD. Like TDD, BDD is also a “test-first approach.” However, unlike TDD, BDD involves multiple stakeholders, such as developers, test engineers, product managers, and users, who work together to ideate the various acceptance criteria in a user story.
Also, BDD takes a different approach to software testing since it tests the system’s actual behavior from the end users’ perspective. Thus, in BDD, tests are written using a “Given-When-Then” behavioral approach:
Given a user does X | When a user clicks on Y | Then display Z
These behavior examples are described in a simple “shared language” such as English or a domain-specific language for writing behavior scenarios, such as Gherkin. The examples are then converted into executable specifications by developers.
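For illustration only, here is how such a scenario might be turned into an executable specification. The Gherkin text appears as a comment, and the step bindings are sketched in plain pytest against a hypothetical in-memory `FakeApp` stand-in; real teams would typically use a BDD framework such as Cucumber, behave, or pytest-bdd instead.

```python
import pytest

# Feature file (Gherkin), the shared language readable by all stakeholders:
#
#   Scenario: Display the dashboard after login
#     Given a registered user
#     When the user logs in with valid credentials
#     Then the dashboard page is displayed


class FakeApp:
    """Hypothetical in-memory stand-in for the application under test."""

    def __init__(self):
        self.users = {}

    def register_user(self, email, password):
        self.users[email] = password

    def login(self, email, password):
        assert self.users.get(email) == password
        return type("Page", (), {"title": "Dashboard"})()


@pytest.fixture
def app():
    return FakeApp()


def test_display_dashboard_after_login(app):
    # Given a registered user
    app.register_user("alice@example.com", "s3cret")

    # When the user logs in with valid credentials
    page = app.login("alice@example.com", "s3cret")

    # Then the dashboard page is displayed
    assert page.title == "Dashboard"
```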
The TDD vs. BDD Debate
Essentially, the BDD vs. TDD debate boils down to “Which is better? – TDD or BDD, or vice versa?”
BDD and TDD are both testing strategies where the tests are used as part of an automated testing framework. However, they differ in scope. So, before adopting either approach, developers should ask, “What am I testing for?”
Answer 1: The software’s implementation.
Choice 1: TDD
Answer 2: The software’s behavior.
Choice 2: BDD
In TDD, the tests will fail when a functionality doesn’t exist. But as new code is written (refactored), the system will evolve until the complete feature is obtained, at which point the test will pass. In comparison, BDD testing looks at a system’s behavior to understand how it will behave when certain conditions are realized.
Since TDD is developer-focused, tests are written by developers, which explains why it predominantly focuses on testing smaller pieces of functionality in isolation. By contrast, as elucidated above, BDD tests an application’s behavior from the user’s perspective, which is why the specifications are created by users or testers rather than developers. Moreover, these specifications are not written in a programming language but in a shared language that everyone speaks; test automation engineers then translate them into executable test code. BDD is principally concerned with the results of a high-level scenario, and its goal is to ensure that the output is correct under a given condition.
The Benefits of TDD and BDD
Both TDD and BDD have their advantages and benefits.
Benefits of TDD
Help find bugs and errors quickly
Generates faster feedback from technical stakeholders
Speeds up fixes
Reduces the time required for rework
Boosts developers’ productivity
Can result in cleaner designs and higher-quality code
Helps create extensible, flexible code that’s easy to maintain
Benefits of BDD
Requirements can be standardized and defined in non-technical language that all stakeholders can understand
Promotes an understanding of how the system should behave from the user’s perspective
Illustrates real-world scenarios to clarify requirements
Improves collaboration between technical stakeholders (developers and QA) and non-technical stakeholders (users)
Reduces the effort required to implement post-deployment fixes
Many Agile teams still use TDD instead of BDD since TDD often results in higher-quality code. However, since BDD improves collaboration among a wider audience, it can result in better specifications and ensure that the maximum number of use cases work at a higher level. This can result in greater confidence in the final application.
Ultimately, neither approach is “better” than the other. So, which side of the TDD vs. BDD debate you fall on would depend on considering all these factors.
In a Nutshell
If you are a software engineer, don’t look at BDD and TDD in isolation. Instead, understand the benefits of both approaches, clarify your testing goals, and then choose the approach that suits your project, team, and users better.
Errors, bugs, and fallacies are all a part of the software delivery process. It’s the presence of these inefficiencies that ultimately paves the way for an improved release. However, that’s only possible when a sound testing plan is in place, and understandably so.
The term “sound testing plan,” however, fuels a separate discussion around the best testing practices for a particular scenario and application – Should it be a test-driven approach? A load-driven scenario? And what about retesting?
In this blog, we will understand
What is regression testing?
What is retesting?
When is regression testing done?
When is retesting done?
Overview of differences between regression testing and retesting
What is Regression Testing?
Regression testing is the process of analyzing the program code and data files and executing its related tests to ensure that any changes made to the application or software do not break existing features or functionality. It’s an important practice in software testing.
The primary goal of regression testing is to maintain a bug-free environment for users and developers by blocking unwanted regressions when new features are introduced into an application. As such, regression testing should be done periodically even if no bugs were found during earlier stages of development, especially after supplementing any significant new functionality.
Why is regression testing important?
Regression testing is vital since it ensures that new code in one part of a system does not create unwanted side effects in other parts of the system. Furthermore, this helps ensure that new changes do not introduce new bugs into previously tested, bug-free software.
Updates, upgrades, changes in functionality, etc., have become a mainstay in the software development landscape. Much of that is attributed to constantly shifting customer demands. As software begins to touch multiple parts of the organization and becomes business-impacting, it becomes crucial to ensure that new changes do not bring about disruption. Let us understand this with an example.
An Example
Let us run through a scenario here –
An organization discovers a bug in the financial system and creates a task.
The developer creates the fix and runs a unit test. The test proves beyond doubt that the bug fix has worked.
They then release the fix into production, and everything seems to be in order.
Soon after, the organization finds itself unable to run the P&L statements. And the system times out every time they try to run an Aging report.
On deeper exploration of the original task and the subsequent steps taken, they discover quite a few things. While the unit test passed, it ran against a small test database that was only sufficient to replicate the error and show a positive unit test result.
The code, however, was not tested using the production data.
The new code thereby created side effects in the system. This problem could have been easily avoided had the fix been incorporated into a system-wide regression test employing a copy of the production data.
Regression testing prevents degradation of system quality as functionality grows and reduces defects at release. It enhances software quality, anticipates errors before an update rollout or product deployment, and ensures happy end-users.
What is Retesting?
Unlike regression testing, retesting is done to verify whether a particular feature or functionality that has been developed, tested, and released is working as expected. It is generally carried out after significant modifications to the code or software.
When running retests, your goal is to determine whether any known bugs have re-emerged or whether new bugs have appeared.
When Is Regression Testing Done?
A variety of cases entail the application of regression testing. This includes after installing a new feature, upgrading or patching, and releasing a new product.
The most common types of regression tests can be summarized as follows:
Implementation of a new feature: for example, a refactoring may have removed code that other functionality still relied on. Regression testing in such cases checks that everything the application depends on still functions properly.
Application of updates and patches: regression testing can be done before or after installation to ensure that the changes are compatible with the existing software. Likewise, a regression test can be run after deploying a new release to ensure it has no unintended consequences.
When is Retesting Done?
The most common retest scenarios can be summarized as follows:
Retesting is done when there are bugs in the software, problems with the reports, or re-emergence of other issues. A re-test ensures that these issues no longer appear.
Changes in the business logic: the numbers in a report may have changed, a new field may have been added to a database table, or some other notable change may have been made that affects overall functioning.
Regression vs. retesting: the difference
These are similar-sounding terms but not interchangeable. Retesting ascertains that a specific code fix works as expected.
Regression testing, on the contrary, ensures that the entire system works as designed after changes. Regression tests, as such, have a much wider scope of activity when compared to retesting.
Retesting usually takes place near the time of code development. Contrarily, regression testing is further along the development lifecycle. These tests need more time for execution as compared to retests. Complete regression testing demands testing for all aspects of the system and needs adequate, system-wide monitoring.
Types of regression testing
There are three primary types:
Unit regression testing
These tests are independent of aspects outside of the unit as they test only the specific unit. During the test, other system functionalities are blocked.
Partial regression testing
This tests the updated unit and the units that interact with it to evaluate if the changes applied to the updated unit affect the software functions and how. This test is applied when:
There are slight changes to the software
The developers are sure that the new code will not affect any other part of the system.
Complete regression testing
Tests and ensures that the system works as intended despite tweaks and updates. This testing comes to the fore when the updates affect the foundation of the code. That’s when the updates significantly affect the codebase and/or when multiple changes are added to the code.
This testing matters as software updates and upgrades become mainstays, ensuring that the program remains secure, stable, and up to date with the latest technological developments.
Regression testing techniques
Regression testing includes functional and non-functional tests and thoroughly investigates the software in search of flaws. Some of the techniques used include:
Retest All
This technique re-runs every test case to confirm that all parts of the software work as intended. It is the most thorough method of regression testing, but it is also the hardest and most expensive.
Regression Test Selection
This technique does not run all the tests in the suite. Instead, it divides tests into three categories: reusable test cases, retestable test cases, and obsolete test cases. Each category has specific coverage criteria, and only the relevant test cases are executed. Testers can also apply a minimization technique and select the smallest set of cases they believe will do the job.
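As a rough sketch, the selection step can be as simple as filtering a categorized test inventory; the categories follow the description above, while the test names and storage format are illustrative.

```python
# Illustrative inventory: in practice this metadata lives in a test management
# tool or in markers on the tests themselves.
inventory = {
    "test_invoice_totals": "reusable",     # unaffected by the change; keep for full runs
    "test_tax_calculation": "retestable",  # touches the modified module; must re-run
    "test_legacy_export": "obsolete",      # feature removed; drop from the suite
    "test_discount_rules": "retestable",
}

# Only the relevant (retestable) cases are executed for this change.
selected = [name for name, category in inventory.items() if category == "retestable"]
print(selected)
```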
Test Case Prioritization
This technique assigns a priority to every test case and executes them in that order. General prioritization and version-specific prioritization are the two common approaches.
Hybrid Testing
This technique combines Regression Test Selection and Test Case Prioritization. It allows testers and QA teams to select the test cases to re-execute based on their priority.
When to perform regression testing?
Given its importance, regression testing should become a regular process for developers and business owners to undertake. Here are the main instances for performing this type of testing:
During the development phase, and as early as the pre-deployment stages of software development
Before releasing new updates. Regression testing should become a part of the regular functionality tests if updates are very frequent and the time between two updates is short
After any bug or error fix
When updating existing features or adding new ones to the system
When making significant changes to the codebase or the system
The challenge with regression testing
Regression testing can get more complicated with every session as software updates accumulate. It also takes up time and resources and is less than optimal when done manually.
Manual regression testing is effort-intensive and can be error-prone: checking the software’s overall functionality on every run can lead testers to overlook other aspects of the system, especially during rapid testing.
Automating regression testing emerges as the perfect antidote to this challenge. Organizations can reduce the time and resources expended on this testing, expand test coverage and enable comprehensive, reliable, and faster testing.
Final Thoughts
Regression testing is more involved than retesting, and its benefits are considerable. However, there’s no denying that combining retesting and regression testing is helpful in the long run because it considerably extends the thoroughness of the process.
An Application Programming Interface (API) enables two or more computer programs to interact with each other and exchange data without any user intervention.
To ensure that the APIs work without any bottlenecks, the QA teams perform API testing.
What are the Benefits of API Testing?
By now, it has been established that API testing is an integral part of the testing cycle. It helps to validate and qualitatively measure the success of business requirements through a defined test case process. More specifically, it helps with:
Early Evaluation: It evaluates the API early on and measures its success against a test suite. This prevents significant loss of time during the development cycle and the over-engineering that might otherwise be needed to address minor issues in later stages.
CI/CD Compliance: Because API testing can be automated, validating the business requirements through continuous integration and a continuous delivery pipeline is smooth. The end-to-end testing cycle is thus more efficient and relevant, as is the quality of the application.
Tight Security: Considering the combined prowess of fuzz, penetration, and security testing that API testing brings to the table, it becomes a robust and authoritative testing method against both the functional and the security anomalies that the API might face.
Time and Cost Reduction: The automation of API testing ensures that the time and resources invested in API testing are well-spent. Besides, the direct impact of API testing in quicker delivery, compliance to standards, better security, and a controlled scope is visible across every phase of development, making for a well-rounded testing strategy.
Platform Agnosticism: API testing ensures that the API can be deployed and tested across multiple technologies, regardless of the platform used. With the advantage of testing on different platforms and the continued availability of both automated and manual testing, API testing becomes a viable option for cross-development frameworks.
All these aspects serve as examples of the relevance of API testing, especially in reducing the risk of delivering an application that is not yet ready for the market.
Challenges in API Testing
1. Initial Setup of API Testing
Setting up the testing infrastructure is a challenging process in itself, not because it's difficult, but because it's a time-consuming process:
1. It's different from other software testing environments as there's no GUI available.
2. The database and server must be configured per the application's requirements.
3. The QA team has to get access to the testing system, get security approvals, and overcome the developer's resistance to performing testing.
Sometimes the setup process can go on for a day or more. Only once the initial setup is complete can the QA team proceed with API testing.
2. Sequencing of API Calls
The API calls must be made in a specific sequence to achieve the desired result. Sometimes, however, the calls do not occur in the expected order, which is especially a problem with multi-threaded apps. Haphazard call sequencing leads to confusion, as the tester cannot check whether the system is working correctly.
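One way to keep the sequence deterministic in an automated check, sketched below with the `requests` library against a hypothetical `/orders` endpoint, is to chain each call off the response of the previous one instead of firing the calls independently.

```python
import requests

BASE = "https://api.example.com"  # hypothetical service under test


def test_order_call_sequence():
    # 1. Create the order first and capture its ID from the response.
    created = requests.post(f"{BASE}/orders", json={"sku": "ABC-1", "qty": 2}, timeout=10)
    assert created.status_code == 201
    order_id = created.json()["id"]

    # 2. Only then fetch it, using the ID returned by the previous call,
    #    so the two calls can never run out of order.
    fetched = requests.get(f"{BASE}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["sku"] == "ABC-1"
```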
3. Selecting, Testing, and Validating the Parameter Combinations
APIs assign data values to parameters, and these parameters are passed in requests to manage communication between systems. Sometimes an API may accept two different values for the same parameter. The QA team has to test all possible parameter combinations to check for issues in specific configurations and to verify that each combination passes the validation criteria.
For example, the team has to check whether each parameter uses the correct string or numeric data type and whether its value fits within the allowed range.
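One way to cover the combinations systematically, assuming pytest and a hypothetical search endpoint with a documented page-size limit, is to generate the parameter matrix instead of hand-writing each case; the endpoint, parameter names, and limit below are illustrative.

```python
import itertools

import pytest
import requests

BASE = "https://api.example.com"  # hypothetical service under test

sort_values = ["price", "rating", "newest"]
page_sizes = [1, 50, 1000]  # include a boundary value beyond the assumed limit


@pytest.mark.parametrize("sort,size", list(itertools.product(sort_values, page_sizes)))
def test_search_parameter_combinations(sort, size):
    response = requests.get(f"{BASE}/search", params={"sort": sort, "size": size}, timeout=10)
    if size <= 100:  # assumed documented page-size limit
        assert response.status_code == 200
    else:
        assert response.status_code == 400  # out-of-range values must be rejected
```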
4. Updating the Schema of API Testing
The API schema (which defines data formatting and storage) must be maintained throughout the testing lifecycle, as it describes the API's requests and responses. The QA team needs to ensure that any program change that adds a parameter to the API calls is reflected in the schema configuration.
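To keep the schema and the tests in sync, response payloads can be validated against the schema on every run. The sketch below uses the `jsonschema` package with a made-up order schema; the fields and constraints are illustrative, and the real schema would come from the API specification.

```python
from jsonschema import validate  # pip install jsonschema

# Illustrative schema fragment: the real one would be generated from the API
# specification and updated whenever a parameter or field is added.
order_schema = {
    "type": "object",
    "required": ["id", "sku", "qty"],
    "properties": {
        "id": {"type": "integer"},
        "sku": {"type": "string"},
        "qty": {"type": "integer", "minimum": 1},
    },
}


def assert_matches_schema(response_json):
    # Raises jsonschema.ValidationError if a field is missing or mistyped,
    # flagging drift between the API and its documented schema.
    validate(instance=response_json, schema=order_schema)
```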
How To Approach API Testing?
1. Define the Scope of Testing
The QA team must start by defining the scope of testing. This includes asking critical questions: which endpoints are available for testing, what the error message should display in the event of an unsuccessful request, what the workflow of the application is, and so on. Document all the testing requirements to plan and implement API tests successfully. Determine who will execute the API tests and how frequently they will execute them.
2. Set Up the Environment and Integrate Application Data
Once the scope is defined, set up the testing environment, configure the database and server according to the application requirements, and integrate application data with the API tests to test the API against all configurations.
3. Apply Various Testing Techniques
Perform various testing techniques such as functionality testing, load testing, and security testing to ensure that the API works efficiently. Each serves a specific purpose, so test thoroughly before sending to production. Before starting the testing, make an API call to check if the API is operational and try all the input combinations to check if it delivers the expected results.
4. Choose the Right API Testing Tool
Pick tools that can execute API testing efficiently. Discuss the different tools with developers and the QA team, analyze their pros and cons, check the programming language they are based on, and evaluate if they fit the budget. Consider using codeless API test automation tools to automate API testing and empower business users to perform testing without writing a single line of code.
5. Develop Test Cases and Analyze the Results
Write and execute test cases and check if the actual results match the expected results. For example, check the behavior of the API in the system when there's no return value in response or check if the response time is within the agreeable limit. Check if the API can handle the expected user load and work seamlessly across different browsers and devices.
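As a minimal sketch of such a test case, assuming the `requests` library, a hypothetical `/users` endpoint, and an agreed 500 ms response-time budget:

```python
import requests


def test_get_user_meets_contract_and_sla():
    response = requests.get("https://api.example.com/users/42", timeout=5)

    # Expected vs. actual result.
    assert response.status_code == 200
    body = response.json()
    assert body["id"] == 42
    assert "email" in body

    # Response time within the agreed (assumed) 500 ms budget.
    assert response.elapsed.total_seconds() < 0.5
```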
6. Document Maintenance
When the test cases are executed, document the results for future reference. Also, make it a point to update the document regularly in case there are changes in endpoints or the backend impacting the API.
What is API Testing?
API testing involves testing or checking application programming interfaces (APIs) against expectations, use cases, and the functionality needed. As a direct testing approach and a part of the integration testing paradigm, it is integral for validating and understanding the functionality of the API and giving context to the API's quality, security, and usability.
Lately, with the advent of DevOps and the associated continuous delivery practices, API testing has become a breeding ground for test automation, and rightly so. Test cases that are part of the automated test suite now help validate the functionality of an API quickly, regardless of its intricacy. Furthermore, the execution of these test cases, which might span multiple iterations, ensures that the API works as expected and meets the business requirements.
At present, the scope of API testing is immense, and the magnified market potential affirms it. According to Global Industry Analytics Inc., the API testing market is projected to close in on a $2 billion market value by the end of 2026. This growth is attributed to the growing dependency on APIs and their ubiquity in the digital world.
Best Practices of API Testing
Here are a few best practices that the QA team can follow for successful testing:
Develop a flowchart to help the QA team visualize the API calls and enable developers to build calls that correspond with the flowchart. This will help in building successful call sequencing.
Test for both positive and negative results. In fact, testing experts recommend testing the API till it produces a negative outcome. This will help the QA team to identify the problems.
Perform API testing for all API input combinations to widen the scope of test coverage.
Use data that would reflect the conditions the API would face in the production phase. This makes API testing more accurate.
Use service virtualization to create a simulated environment and test as many test cases, scenarios, and endpoints as possible. This will enable the QA team to confirm that the application provides an acceptable level of security.
Perform both manual and automated API tests to produce more trustworthy results.
Automate the documentation creation process to ensure that all the test cases are documented and updated in a standardized format.
Examples of API Testing
1. Testing if a mobile app can pull out the necessary information about the user from Google or Facebook
A user gets two options when installing a mobile app: they can either manually enter their details to sign up or use their Facebook or Google login credentials. The QA team can use API testing to check whether the app works with Facebook and Google and pulls out the information required to access the account.
2. Testing the payment integration in a mobile app
A user may want to use their bank account or PayPal account to pay for the products they have purchased from an eCommerce website. The QA team can do API testing to check if the eCommerce website is able to communicate with the banks or PayPal to process the payment and if it can safeguard the user's details throughout the process.
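As an illustration only, the sketch below shows how such a payment-integration check might look in an automated test. The checkout URL, request fields, token, and response format are hypothetical assumptions for the sketch, not a real bank, PayPal, or ACCELQ API.

```python
# Illustrative only: a hypothetical checkout endpoint that delegates to a payment provider.
import requests

def test_checkout_via_payment_provider():
    payload = {"order_id": "ORD-1001", "method": "paypal", "amount": 49.99}
    response = requests.post(
        "https://shop.example.com/api/checkout",  # hypothetical endpoint
        json=payload,
        headers={"Authorization": "Bearer <test-token>"},
        timeout=10,
    )
    assert response.status_code == 200
    body = response.json()
    assert body.get("payment_status") == "confirmed"
    # The user's account details should never be echoed back in the response
    assert "account_number" not in body
```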
Why ACCELQ for API Test Automation?
Speed up the continuous delivery cycle by reducing the dependency on manual testing
Facilitate end-to-end API validation through chained API tests
Empower testing engineers to make more informed decisions on the go
Maintain your standards and respond to business requirements rapidly
Reduce the cost of development and maintenance of the codebase with a lean testing strategy
Do all that and more with ACCELQ's codeless and automated API testing. We believe in striking a balance between speed and quality.
If you were to prioritize different automated testing initiatives, unit testing would be in the top five spots. It is a central component of modern digital application QA initiatives. Unit testing ensures granular stability and resilience for any large digital application by validating the performance of its component modules.
However, over the years, software testing teams have adopted several different approaches. The diversity in approaches may prove to be confusing and challenging for enterprises when they decide on a unit testing strategy today.
Hence, it is important to know the best practices for unit testing in detail. But before going into them, it is important to have a clear understanding of unit testing itself and to ensure that all stakeholders within the enterprise are on the same page while discussing and agreeing upon those best practices.
What is Unit Testing?
In simple terms, unit testing refers to the testing of modular or functional units of a larger software application, done in isolation from other modules.
It tests the resilience, performance, and capacity of individual functional units in real-life conditions. The primary aim of unit testing is to ensure that their actual behavior aligns with their expected behavior.
QA teams conduct this testing without connecting the module to other external entities like databases, libraries, and file systems. In other words, each module is considered a small program that can run on its own and exhibits a diverse set of behavior for every operational scenario. With unit testing, developers get a clear idea of how their individual code blocks perform. It allows them to make quick adjustments in the event of any unexpected behavior.
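A minimal sketch of what this looks like in practice, using Python's built-in unittest module; calculate_discount is a hypothetical unit under test, exercised in isolation from any database, file system, or network dependency.

```python
# Sketch: testing a single functional unit in isolation with unittest.
import unittest

def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class CalculateDiscountTests(unittest.TestCase):
    def test_applies_percentage_discount(self):
        # Expected behavior: 25% off 200.0 is 150.0
        self.assertEqual(calculate_discount(200.0, 25), 150.0)

    def test_rejects_invalid_percentage(self):
        # Unexpected input should fail loudly, not silently
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```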
Why is Unit Testing Important?
As the digital economy expands, the traditional focus on software testing and quality assurance is no longer an acceptable standard for enabling the best customer experience.
Every strategic approach in software testing needs to be given its due share of importance and allocated resources, both manpower and tools, to facilitate seamless execution. While many leaders may stress giving more priority to areas like acceptance testing, the reality is that every test approach deserves special focus, as any let-down may impact overall application quality.
Since more businesses are opting for a cloud-based microservices architecture for their applications, there is an increased focus on the granular stability of the application. Testing cloud-based apps with a microservices architecture is important, and unit testing plays a very crucial role in this regard.
With unit testing, developers have more control over the quality of their individual code blocks before the components are integrated and sent for regression testing. It is also easier to identify and rectify mistakes or defects at the code level, which helps enterprises save costs significantly compared to discovering defects later in the development cycle.
What Are the Best Practices for Unit Testing?
Now that you have a clear idea of what unit testing is and why enterprises need to focus on implementing it, it is time to learn about the best practices.
We have outlined six major best practices for unit testing that will guarantee better results and ensure profitable and sustainable testing coverage for your application development projects.
Create reliable and clear unit tests
The first step to building a great unit testing practice is to write reliable unit tests. The test suite must report the unexpected or suspicious behavior observed during testing, not simply confirm the stage at which a test succeeds in delivering an expected output or outcome.
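For instance, here is a hedged sketch of a test whose assertion explains what went wrong rather than merely passing or failing; parse_age is a hypothetical unit under test.

```python
# Sketch: an assertion message that reports *what* failed, not just that something did.
import unittest

def parse_age(raw: str) -> int:
    return int(raw.strip())

class ParseAgeTests(unittest.TestCase):
    def test_strips_whitespace_before_parsing(self):
        result = parse_age(" 42 ")
        self.assertEqual(
            result, 42,
            msg=f"Expected whitespace to be stripped before parsing; got {result!r}",
        )
```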
Mandate automation in unit testing
Manual bias may often create unsatisfactory test results, and it can lead to future complications if defects are not detected in the early stages. It is important to mandate automated testing strategies in your technology operations to prevent this situation. Automated unit testing delivers faster feedback, provides significantly higher code coverage, and allows controlled parallel processing of test cases. It lets test engineers work more efficiently in less time.
Prioritize attention to use cases
One of the critical aims of unit testing is to ensure that the code is easily maintainable and that new developers can understand it without any issues. By focusing on a single use case at a time and prioritizing the rest in sequence, it is easier to validate outcomes against expected results.
Speed up tests with better isolation
Unit tests should be carried out in isolation, without depending on other tests and with external dependencies mocked. Keeping them simple and independent helps achieve faster code coverage, and automation can boost it further. All these measures help speed up test activity, which is a critical factor in making unit testing more appealing to developers. If tests consume too much time, it will be tough to convince developers to wait long before pushing their code to the next stage.
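A small sketch of this isolation, assuming Python's unittest.mock: the real HTTP client is replaced with a mock so the test stays fast, deterministic, and independent of external systems; fetch_user_name and its client argument are illustrative names, not part of any real library.

```python
# Sketch: isolating a unit from its external dependency with a mock.
from unittest.mock import Mock

def fetch_user_name(client, user_id: int) -> str:
    response = client.get(f"/users/{user_id}")
    return response["name"].title()

def test_fetch_user_name_titlecases_result():
    fake_client = Mock()
    fake_client.get.return_value = {"name": "ada lovelace"}  # canned response, no network

    assert fetch_user_name(fake_client, 7) == "Ada Lovelace"
    fake_client.get.assert_called_once_with("/users/7")
```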
Integrate unit testing into the build process
While developers following a Test Driven Development (TDD) model will provide sufficient coverage for unit testing, it is always best to make it a part of the build process for additional safety. When going in for end-to-end test automation, the build process will ensure that all unit tests are executed and will notify the respective stakeholders whenever a test fails.
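One possible way to wire this in, sketched with Python's unittest test discovery: the script returns a non-zero exit code so the build or CI job fails whenever a test fails. The tests directory and the notification hook are assumptions for illustration.

```python
# Sketch: running the unit test suite as a build step and failing the build on errors.
import sys
import unittest

def run_unit_tests() -> int:
    suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    if not result.wasSuccessful():
        # Hook point: notify the relevant stakeholders (e.g., post to a chat channel) here.
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(run_unit_tests())
```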
Follow distinguishable and simple test nomenclature
Test activities are also a stage for documenting process workflows. If every unit test is named in a standard convention that explains the scenario being tested, it will be easier to document. Reusing these cases across the organization when different stakeholders are trained on the applications also becomes easy. The naming convention also enables better identification of defect causes in other test approaches like integration testing, acceptance testing, etc.
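For example, a unit-under-test / scenario / expected-outcome convention might look like the hypothetical names below; the convention and the test class itself are illustrative, not a prescribed standard.

```python
# Sketch of descriptive, self-documenting test names (bodies elided).
import unittest

class TransferFundsTests(unittest.TestCase):
    def test_transfer_funds_with_sufficient_balance_debits_sender(self):
        ...

    def test_transfer_funds_with_insufficient_balance_raises_error(self):
        ...

    def test_transfer_funds_to_closed_account_is_rejected(self):
        ...
```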
Conclusion
Unit testing is a critical component of modern application quality assurance initiatives, yet many enterprises find it hard to facilitate its seamless execution. This is where automation of test activities can be a lifesaver.
Book a demo to learn how ACCELQ can help you automate your end-to-end test initiatives.
Three planes, 25 hours, 10,000 miles. My dad gets off a flight from Australia with one thing in mind and it's not a snack or a shower or a nap. It's November 2016 and Dad is here to talk to Americans about the election. Now, Dad's a news fiend, but for him, this is not just red or blue, swing states or party platforms. He has some really specific intentions. He wants to listen, be heard and understand.
00:35
And over two weeks, he has hundreds of conversations with Americans from New Hampshire to Miami. Some of them are tough conversations, complete differences of opinions, wildly different worldviews, radically opposite life experiences. But in all of those interactions, Dad walks away with a big smile on his face and so does the other person. You can see one of them here. And in those interactions, he's having a version of what it seems like we have less of, but want more of -- a constructive conversation.
01:13
We have more ways than ever to connect. And yet, politically, ideologically, it feels like we are further and further apart. We tell pollsters that we want politicians who are open-minded. And yet when they change their point of view, we say that they lacked conviction. For us, when we're confronted with information that challenges an existing worldview, our tendency is not to open up, it's to double down. We even have a term for it in social psychology. It's called belief perseverance. And boy, do some people's beliefs seem to persevere.
01:53
I'm no stranger to tough conversations. I got my start in what I now call productive disagreement in high school debate. I even went on to win the World Schools Debate Championship three times. I've been in a lot of arguments, is what I'm saying, but it took watching my dad on the streets of the US to understand that we need to figure out how we go into conversations. Not looking for the victory, but the progress.
02:22
And so since November 2016, that's what I've been doing. Working with governments, foundations, corporations, families, to uncover the tools and techniques that allow us to talk when it feels like the divide is unbridgeable. And constructive conversations that really move the dialogue forward have these same three essential features.
02:46
First, at least one party in the conversation is willing to choose curiosity over clash. They're open to the idea that the discussion is a climbing wall, not a cage fight, that they'll make progress over time and are able to anchor all of that in purpose of the discussion. For someone trained in formal debate, it is so tempting to run headlong at the disagreement. In fact, we call that clash and in formal argumentation, it's a punishable offense if there's not enough of it. But I've noticed, you've probably noticed, too, that in real life that tends to make people shut down, not just from the conversation, but even from the relationship. It's actually one of the causes of unfriending, online and off.
03:43
So instead, you might consider a technique made popular by the Hollywood producer Brian Grazer, the curiosity conversation. And the whole point of a curiosity conversation is to understand the other person's perspective, to see what's on their side of the fence. And so the next time that someone says something you instinctively disagree with, that you react violently to, you only need one sentence and one question: “I never thought about it exactly that way before. What can you share that would help me see what you see?” What's remarkable about curiosity conversations is that the people you are curious about tend to become curious about you. Whether it's a friendly Australian gentleman, a political foe or a corporate rival, they begin to wonder what it is that you see and whether they could see it too.
04:46
Constructive conversations aren't a one-shot deal. If you go into an encounter expecting everyone to walk out with the same point of view that you walked in with, there's really no chance for progress. Instead, we need to think about conversations as a climbing wall to do a variant of what my dad did during this trip, pocketing a little nugget of information here, adapting his approach there. That's actually a technique borrowed from formal debate where you present an idea, it's attacked and you adapt and re-explain, it's attacked again, you adapt and re-explain. The whole expectation is that your idea gets better through challenge and criticism.
05:35
And the evidence from really high-stakes international negotiations suggests that that's what successful negotiators do as well. They go into conversations expecting to learn from the challenges that they will receive to use objections to make their ideas and proposals better. Development is in some way a service that we can do for others and that others can do for us. It makes the ideas sharper, but the relationships warmer. Curiosity can be relationship magic and development can be rocket fuel for your ideas.
06:16
But there are some situations where it just feels like it's not worth the bother. And in those cases it can be because the purpose of the discussion isn't clear. I think back to how my dad went into those conversations with a really clear sense of purpose. He was there to learn, to listen, to share his point of view. And once that purpose is understood by both parties, then you can begin to move on. Lay out our vision for the future. Make a decision. Get funding. Then you can move on to principles.
06:56
When people shared with my dad their hopes for America, that's where they started with the big picture, not with personality or politics or policies. Because inadvertently they were doing something that we do naturally with outsiders and find it really difficult sometimes to do with insiders. They painted in broad strokes before digging into the details.
07:28
But maybe you live in the same zip code or the same house and it feels like none of that common ground is there today. Then you might consider a version of disagreement time travel, asking your counterpart to articulate what kind of neighborhood, country, world, community, they want a year from now, a decade from now. It is very tempting to dwell in present tensions and get bogged down in practicalities. Inviting people to inhabit a future possibility opens up the chance of a conversation with purpose.
08:10
Earlier in my career, I worked for the deputy prime minister of New Zealand who practiced a version of this technique. New Zealand's electoral system is designed for unlikely friendships; coalitions, alliances, memoranda of understanding are almost inevitable. And this particular government set-up had some of almost everything -- small government conservatives, liberals, the Indigenous people's party, the Green Party. And I recently asked him, what does it take to not just bring a group like that together, but hold them together? He said, "Someone, you, has to take responsibility for reminding them of their shared purpose: caring for people." If we are more focused on what makes us different than the same, then every debate is a fight. If we put our challenges and our problems before us, then every potential ally becomes an adversary.
09:15
But as my dad packed his bags for the three flights, 25 hours, 10,000 miles back to Australia, he was also packing a collection of new perspectives, a new way of navigating conversations, and a whole set of new stories and experiences to share. But he was also leaving those behind with everyone that he'd interacted with. We love unlikely friendships when they look like this. We've just forgotten how to make them. And amid the cacophony of cable news and the awkwardness of family dinners, and the hostility of corporate meetings, each of us has this -- the opportunity to walk into every encounter, like my dad walked off that plane, to choose curiosity over clash, to expect development of your ideas through discussion and to anchor in common purpose. That's what really world-class persuaders do to build constructive conversations and move them forward. It's how our world will move forward too.
10:28
Thank you.