This video is a step-by-step guide to leveraging analytics to make smart decisions on QA testing / test automation projects and increase efficiency.
Accelerated mobile QA and test automation
Manish: Hello, and welcome to the Infostretch webinar, Mobile Test Automation: Lessons from the Trenches. My name is Manish Mathuria, and I’ll be walking you through this webinar. Thank you for joining, and we’ll start now. If you want to get in touch with me after this webinar, here are some of the details. You’re welcome to write to me at my email address or connect with me on LinkedIn. As you already might know, we have pretty significant blog activity about mobile testing and mobile automation, which is near and dear to our hearts at Infostretch.
So please feel free to check out our blogs at blog.infostretch.com. Great. So moving on. After walking through the mobile landscape and its testing challenges, we will talk about mobile test automation tool categories, looking at the broad categorization of mobile test automation tools. Then we will dive into mobile automation best practices, which is the crux of this presentation: how does mobile test automation differ from desktop or web test automation, and what are some of the best practices around it? Finally, we’ll walk you through a short case study on one of our bigger clients, where we tested and automated a mobile web application on multiple devices.
This is an airline solutions web app. Finally, we’ll open up to Q&A, so feel free to write your questions in the online webinar question-and-answer mechanism, and we’ll be happy to answer them. Going right in, let’s look at the mobile landscape and how it is reflected in the testing challenges that today’s QA and test automation teams have to deal with. Mobile is definitely on the fast track. Here is some data on how the mobile landscape is unfolding. As you can see, the gap between the human population out there and the proliferation of mobile devices is closing really fast.
The world now produces devices faster than it produces people on a daily basis, and counting just smartphones and tablets, we are well over that number. So basically, the point here is that the reach of smartphones, and mobility in general, is growing faster than you and I can imagine. This slide and the next talk a little bit about how mobility is getting more and more prevalent in the enterprise space.
As we know, mobile applications have, over the past two years, ever since the Apple and Android app stores became more commonplace, really taken off, and we all know about the number of applications that get downloaded from these app stores. In the past year or so, there have been significant advances in mobile applications in the enterprise as well. Some of the content on this slide and the next shows where enterprises are. As the second part of this slide shows, there are certain industries in which mobile applications and mobile solutions are very, very commonplace. Healthcare is one of the top ones.
Financial services, as always an early adopter of technology, is another. The travel and retail industries are big on mobility. So there are a lot of advances in the enterprise world for mobile applications, and this gap is going to close as fast as it can. There is also an interesting data point, relevant as of 2009/2010 (and I’m sure it is a lot more pronounced now), which states that 40 percent of brands out there, enterprise and consumer brands, have developed more than 30 applications. So if you work for a large company or an enterprise, or if you have anything to do with consumers, chances are you already have more than one application being worked on.
Also, it’s important to talk a little bit about the mobile development lifecycle, and how short the mobile app lifecycle is, to speak intelligently about why testing is important and how that lifecycle impacts testing. And there are obvious reasons for it. As innovation continues in the mobile space, there are a significant number of new use cases and user interaction mechanisms for mobile devices, there are customer demands, and there is the way your competition reacts to them. All of these factors contribute to the very short lifecycle that mobile applications have.
This is an example from the evolution of web browsers, one of the most prominent technologies we have known so far to shape how consumers and end users interact with computing devices. Between 2005 and 2011, as the slide says, there were four or five releases of the prominent browsers. So Internet Explorer and all the leading browsers, Chrome, Safari, Firefox, etc., had four or five releases, which works out to a release cycle of roughly twelve to eighteen months. Now, contrast that with a single year of Android OS releases: we went from Version 2 to Version 4 all within 2011.
Compound that with the sub-releases of the OS, and the level of activity that happens in the mobility space is huge. And it’s very significant. And that has a direct impact on how we look at testing these applications and how we automate them. The same behavior shows up with form factors and other operating systems. So the beginning part of this presentation, which I just walked through, has a message. The message is that if you are an enterprise, no matter what industry, then obviously, as we have seen, there are certain industries that are more prominent and others that are less so.
But no matter what industry, and whether or not your C-level executives care about a BI solution, they do care about your mobile app. You have unique kinds of challenges in your development and testing, and mobility is here to stay. The only way to tackle this is to think about how to work in an agile world and how to deal with automation. So that’s a good segue to the next part of the presentation, which dives deeper into mobile automation. As we look at mobile automation and the challenges around it, I think it’s wise to divide the prominent, or prevalent, automation tools into categories at the technology level.
And as we go into it, you will see why. The way we see it, there are three main broad categories of automation tools. One is a set of tools that work at the HTML level, that is, in the browser or in HTML-based applications. With these, we can automate things much the way we have been automating websites and web applications, but on mobile devices. The second broad category is tools that work on the native platform. And finally, there are third-party tools, which we call platform-independent mobile automation technologies.
And they are very rapidly evolving, so how these tools worked six months ago is a lot different from how they work right now. We at Infostretch take a very close look at the available tools and technologies out there, this being one of our prime activities, and we publish that on our blogs. So I welcome you to look at our blogs and see how we use some of these tools. Digging a little deeper into these categories: mobile HTML-based automation pretty much drives HTML and JavaScript. So whatever you have on your web page or your cross-platform app, the tool is trying to drive that.
So the tool will recognize every button, every link, every widget on your HTML-based presentation layer and drive it. It recognizes each web control. The positives are that, by virtue of that, it automatically works across device platforms. Because the automation tool is aware of your objects at the user-interface level, it is very robust and resilient to changes in your underlying app. And of course, it is non-intrusive to the application: you don’t have to change the application code to make it testable.
The downside is that it is, of course, limited to web, cross-platform, and HTML5 apps. So you can’t test native apps using this category of tools. The second category, as we discussed, is native platform automation technologies, which means that iOS has a built-in automation technology and Android has a similar one. This is by far the most powerful, and most intrusive, approach you can get. You can test pretty much anything and everything in your application’s user interface. However, it requires test code written specifically for each device platform. So if you have an app that is developed once for Android, once for iOS, and once for Blackberry, you unfortunately have to create platform-specific automation: just like you develop your app three times, your test automation code will also be developed three times.
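To make the platform-specific nature of this category concrete, here is a minimal sketch of a single native test step written against Android's UiAutomator framework (using today's androidx packaging; the button text is a made-up example). The point is that this code is Android-only; the same business step would have to be rewritten against Apple's UI automation APIs for iOS.

```java
import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.UiDevice;
import androidx.test.uiautomator.UiObject;
import androidx.test.uiautomator.UiObjectNotFoundException;
import androidx.test.uiautomator.UiSelector;

public class NativeLoginStep {
    // Taps a "Log In" button through Android's native automation API.
    // An equivalent iOS step needs a separate, platform-specific
    // implementation, which is the maintenance cost described above.
    public void tapLogin() throws UiObjectNotFoundException {
        UiDevice device =
                UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
        UiObject loginButton = device.findObject(new UiSelector().text("Log In"));
        loginButton.click();
    }
}
```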
Now, there is an interesting category of platform-independent mobile automation. For my own benefit, I divide it into two types, Type A and Type B. Type A is something that actually uses screen-based recording and screen-based recognition: OCR, optical character recognition, recognizes the contents of the screen, which lets the test work with the content showing on the screen via OCR.
And the benefit is that it actually works across platforms. Since it works at the screen level, it works across devices and across platforms. A side benefit is that it has access to the whole device. So if you have a scenario where you test something in an app, then send an email, then come back to the app and check the network, etc., it works across the whole device, not limited to your application. On the downside, it has limited object awareness, and it relies on image capture and OCR on the device.
Now, on top of this basic technology, several tools have recently come up that I call Type B, which take this basic approach and make it more object-aware. So using one script, you can create test automation code that works across devices, and it works in an object-aware way, where your automation code actually deals with the widgets on the screen. Sometimes this can be intrusive, in that you need to create a special build, or compile the tool’s library into your code, for the tool to be object-aware of your application such that it can insert test code into the testing process.
But it definitely is a lot more powerful than Type A. The purpose of defining these broad categories is to help identify which tools work for which kinds of applications, based on the application’s characteristics, and how best to implement a given tool for a particular characteristic. So, getting into specific best practices and lessons learned around test automation, I am dividing these lessons into four main categories: how to select test cases and devices for mobile automation; what kinds of scripting challenges one has to deal with, and how you really deal with the underlying problem of mobile automation, which is fragmentation?
So even on an Android device, you may have to put smarts into the script so that it can recognize the nuances between two different Android devices and deal with them. Then, we look at dealing with special device conditions: how do you address those? And finally, what happens when you actually execute test cases, and what best practices can you build around test execution? So, device selection. Obviously, the question we are trying to answer here is: how do we maximize our test coverage at minimum cost and time?
Depending on the objective of your application and who the target audience is, the number of devices needed for good coverage can be beyond the reach of any practical testing effort. You may have hundreds of devices on which you would need to test in order to provide good coverage, and that often is not practical. So one has to figure out how to maximize coverage, and what mechanism or technique to use to do it at minimum cost and time. These are some of the factors we look at when we identify the device and test matrix that we will use when we actually do testing and automation.
A few of the characteristics: first, the type of the app. The basic nature, function, and job of the application determines which devices to use for testing and automation. If you are developing a game, certain types of devices will be more prominently used; if it’s a business app, certain other types of devices. A lot of times, this information is available from the marketing department. At our company, we track a lot of this information in our device repository and in our knowledge base, so that we can come up with the right matrix for the right problem.
And we typically do that routinely for our customers. User personas are another category: not just the type of the app, but who is using it. Is a teenager using it, or a business traveler? Is it your typical consumer? Is it a social type of app used by a senior or by a mom? Depending on that, the devices matter. Geography, of course, is relevant, because different devices are released in different parts of the world, and one needs to account for that. What app functions are possible with a particular device also matters: certain devices have certain capabilities and others don’t.
So if your app is streaming, or if it is written for a specific screen, all of these factors have to be considered when you are identifying the devices. Also, device popularity: there is a lot of data that we mine and track, based on which we can tell which devices are popular in a particular geography for a certain form factor and OS. So those are some other factors. The purpose of this analysis is to eventually come up with a device/OS test matrix. The idea is that we arrive at a set of devices and underlying OS’s on which we will do a certain level of testing.
It will typically look like this: on the base OS of a device, we do a full test, and on all the later OS’s we may do a partial, smaller test, and so on and so forth. The idea is that we come up with this matrix so that, when we get into the testing phase, we know precisely what coverage we are getting and why, and that the coverage satisfies our business needs. Next: how do you select test cases? This is particularly for automation. For manual testing, your test case selection depends on the coverage you want and the type of application. But out of the universe of test cases you have identified, which are good candidates for automation?
That is the question we are typically trying to answer. Right off the bat, there are certain test cases which are not automatable: test cases that interact with the system or peripherals, ones that want you to take a picture, or a bar-code-scanning application where you need to scan multiple different bar codes. Those may not be automatable in the normal usage form in which you or I would interact with the device. There may be other techniques to automate them, where you feed a bar code programmatically to the device or to the app.
But in the normal sense, it may not be automatable, and we need to identify that. Depending on the tool you use, interaction between multiple apps, interaction between the OS and the application, or multi-domain test cases may not be automatable either. So keep in mind that if your test involves sending a text message while you are testing something, or receiving a text message, it may not be a great candidate for automation. Special conditions, such as location-aware testing or field testing, are again types of test cases you may not automate.
This boils down to the following: the test cases you can and should automate are functional regression tests. They give you the most value from automation, and you should definitely look at automating them. Another factor is tests, or parts of the app, that are stable and going to change less. This is a standard automation best practice that applies not just to mobile but to any presentation-layer automation: automate the areas that are most stable and least likely to change, and for which you have a good understanding of the business processes.
Also, a best practice we like to employ is to start with test cases of medium complexity, then go to the high-complexity test cases, and then to the low-complexity ones. Similarly, in the smoke test category, we go with high-priority, then medium-priority, then low-priority test cases. So these are the different characteristics and best practices we use to identify which test cases to automate. Now, what are some of the scripting challenges we typically come across, and how do we deal with them? How do we deal with fragmentation?
As you know, there are genuine issues with doing automation at the user-interface level, and those get multiplied many-fold in mobile automation. For example, this slide shows that, on the same base OS, Android 2.3, the screen displays very, very differently on two devices. If your automation script assumes it will be shown the same, you may be in for a surprise. This is an example of how fragmentation impacts you while doing automation. Also, the level of difference that can exist due to the platform, the device, and the form factor can be significant as well.
So when you write automation scripts, it is very important to design with this problem in mind: one functional test case will, inside it, have to deal with many variations in form factors, screen resolutions, and device types. If you don’t design it like that, you may be in for a lot of surprises. Again, this is an example of how form factor impacts the content and how it looks; for example, how it looks on two different Android versions and on an iPhone can be quite different. Another important point: do not do your mobile automation on desktop web browsers.
This is particularly applicable for cross-platform applications or mobile web applications, where you may feel that testing on a desktop web browser will suffice. In reality, it does not, and much the same applies to simulator tests. How an application looks on a desktop browser can be quite different from how it looks on an actual device. So how do you deal with fragmentation? When we design test cases, we develop our automation code in a layered approach.
What that means is that, at the level of code which actually interacts with the devices, we implement methods that interact with a specific type of device, and we write multiple parallel methods like that as components that can be used in an overarching test case. So if a test case script needs to press a button, that button-press method can be implemented differently for different device platforms. By layering the code of our test cases, with a component level that interacts with the devices and a test case level that implements the business functionality, we can address fragmentation, easily maintain the code, and extend our test cases when new devices come in. For example, if a brand new device comes in, we may need to write completely new functions to deal with that device’s UI, which may be different from the other devices on hand.
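As a rough sketch of this layering (all class and method names here are illustrative, not any particular tool's API), the test-case layer talks to an abstract component, and each device platform supplies its own implementation:

```java
// Component layer: one interface, one implementation per device platform.
interface CheckoutScreen {
    void pressBuyButton();
}

class AndroidCheckoutScreen implements CheckoutScreen {
    public void pressBuyButton() {
        // Locate and tap the button the way this platform renders it.
    }
}

class IosCheckoutScreen implements CheckoutScreen {
    public void pressBuyButton() {
        // Same business action, different locator strategy for iOS.
    }
}

// Test-case layer: pure business flow, no device-specific code.
class PurchaseTest {
    private final CheckoutScreen checkout;

    PurchaseTest(CheckoutScreen checkout) {
        this.checkout = checkout; // injected per target device
    }

    void verifyPurchaseFlow() {
        checkout.pressBuyButton();
        // ...assert that the order confirmation appears...
    }
}
```

When a brand-new device arrives, only a new CheckoutScreen implementation is added; the test-case layer stays untouched.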
So, dealing with special conditions: what do we mean by special conditions? As we know, apps misbehave in conditions that are not the optimal conditions under which we developed the application. These could stem from how much CPU is in use, how much free memory is available, what network conditions we are in, how other apps are interacting with your device, and what environmental conditions we are in.
Believe it or not, things like humidity, temperature, etc., have an impact on devices, and we have actually seen test cases misbehave or fail because of environmental conditions. If your app uses sensors such as an ambient light sensor, or uses the camera and such, light conditions can matter as well. So how can we create automation test cases that reproduce some of these conditions on the device and drive the device under those conditions? Before we get there, here are some handy tools we use frequently to assess what state the device is in.
For example, on Android, there are apps such as System Panel or a task manager app. That is a really cheap investment that helps significantly in automation or manual testing. Similarly, for the iPhone, there are system monitoring tools available for purchase from the App Store. Coming back to creating these special conditions: to aid our automation, we have developed tools and components that get the device into a special condition so that we can then do automation. We can parameterize a test case with such a component, get the device to a certain state, and then run the automated test.
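A sketch of what such a condition component might look like (the interface and classes are our illustration, not a published API):

```java
// Each special condition can set itself up before the test and clean up after.
interface DeviceCondition {
    void apply();   // drive the device into the special state
    void reset();   // restore the device afterwards
}

class LowMemoryCondition implements DeviceCondition {
    public void apply() {
        // e.g., launch memory-hungry helper apps until free memory
        // drops below the target threshold.
    }
    public void reset() {
        // Kill the helper apps to free the memory again.
    }
}

class ConditionedTestCase {
    // Wraps an ordinary functional test body in a special condition.
    void runUnderCondition(DeviceCondition condition, Runnable testBody) {
        condition.apply();
        try {
            testBody.run();
        } finally {
            condition.reset();
        }
    }
}
```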
Now, some best practices. One of the challenges with automation is that test cases will randomly fail, or your automation will stop abruptly while executing, for various reasons. There are many reasons test cases will break. So across all of our automated test cases, we have invested in a robust test recovery system. What that means is, if we detect that a test case has failed, or has simply aborted, we detect that and we can re-run the test case.
And that’s very important because, as I said, test cases will fail. So we have not only developed a robust test recovery system, but we also do extensive logging from the test cases, so that when a particular test case fails, we can go in later and debug why it failed. This slide is an example of the amount of logging we do. Our report shows what the test case was doing when it was running, which specific steps passed and which failed, and the logs from our test cases are also reflected in the report. So we can do a ton of analysis of what the test case went through.
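A bare-bones sketch of the recovery-plus-logging idea (the retry limit and logger wiring are assumptions for illustration, not our actual framework):

```java
import java.util.logging.Logger;

class RecoveringTestRunner {
    private static final Logger LOG =
            Logger.getLogger(RecoveringTestRunner.class.getName());
    private static final int MAX_ATTEMPTS = 3;

    // Runs a test body; on failure, logs the cause and retries, so one
    // flaky device hiccup does not abort the whole nightly run.
    void runWithRecovery(String testName, Runnable testBody) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                LOG.info(testName + ": attempt " + attempt);
                testBody.run();
                LOG.info(testName + ": passed");
                return;
            } catch (RuntimeException e) {
                LOG.warning(testName + ": attempt " + attempt
                        + " failed: " + e.getMessage());
                // Recovery hook: reset app state, relaunch, etc.
            }
        }
        LOG.severe(testName + ": failed after " + MAX_ATTEMPTS + " attempts");
    }
}
```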
Coming back to the reports: they are readable even by a layperson, somebody who simply understands the application well. Moving on, continuous integration is another element of managing test execution well. What this implies is that it always helps to have a test automation management setup that lets you execute the test cases from a CI tool like Jenkins, and run them on a periodic, continuous basis so you can build trending across builds. CI is an effort with a very easy, very quick ROI. It will pay off in days.
The alternative is somebody executing these automated test cases manually, thereby, in fact, defeating the whole reason you automated them in the first place. So invest in a continuous integration tool, and invest in a dedicated set of automation devices against which you can run the automation. Most of our automation test cases run under CI with Jenkins, which allows us to monitor the test cases executing on a periodic, nightly basis. This brings me to the final part of the presentation, a quick walk-through of the case study: a mobile web application, basically a travel application, from an airline solutions provider.
We used Selenium to automate it. We created a test lab with WebDriver installed on the individual mobile devices, and we could execute our test cases from a remote, Jenkins-based CI tool, running them through the WebDriver on these devices, collecting the results in a repository within CI, and reporting on them. That was the overall test environment. Some of the technical challenges we came across while automating are listed here. Selenium WebDriver was something that was still maturing.
So the best practice we learned was to build the WebDriver code frequently, as frequently as required, because problems and bugs were constantly being fixed in the WebDriver, and it helped to take the latest source and build it. Element ID discovery was another challenge: we needed to find novel and interesting ways to identify the elements we were automating against, because commonly used identification techniques are not supported on all mobile devices. So we actually had to work with the development team to build some testability into the mobile web application they were developing.
There were also differences between the Android WebDriver and the iOS WebDriver, which we had to recognize and code for specifically. Those were some of the technical challenges we came across. Lessons learned from this case study: for the automation team, it totally makes sense to be co-located with the development team, particularly in an agile model, where the automation team can interact with the developers directly. Like I said, there are several conditions or states for which it is important to build testability into the development code.
And therefore, it makes sense to co-locate the automation and development teams. I talked about developing the scripts with fragmentation in mind. It pays a lot to design the scripts in a very component-oriented way, such that if things change, if a particular part of the application changes or a new device gets introduced, you change it in one place rather than changing the script in multiple places.
Also, it helped us a lot to get an up-front agreement on the devices we needed to support, because what we found was that doing automation on a brand new device takes its own time, as we have to develop all of the specific, unique components that particular device is going to need. Finally, there is a lot of ongoing change in Selenium and in the iOS and Android WebDrivers, so it pays to do the build yourself, so that you can pick up the evolving changes in this technology and use them in your automation code.
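To make the case-study environment concrete, here is a minimal sketch of the kind of remote driver setup described above. The endpoint URL and capability values are placeholders, and the project itself used the device-hosted WebDriver builds of that era, so treat this purely as an illustration of the pattern:

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DeviceLabSmokeTest {
    public static void main(String[] args) throws Exception {
        // WebDriver server running on a device in the lab (hypothetical host).
        URL deviceEndpoint = new URL("http://device-lab.example.com:8080/wd/hub");

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", "android"); // placeholder capability

        WebDriver driver = new RemoteWebDriver(deviceEndpoint, caps);
        try {
            driver.get("https://m.example.com/booking"); // app under test
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit(); // free the device for the next CI job
        }
    }
}
```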
I will stop here for questions. You can write your questions in the WebEx tool, and I will take them now. One question I commonly get asked is how to deal with special conditions when you’re interacting with peripherals, that is, test cases that actually need to use a peripheral such as a camera. How do you automate those? There is no easy answer. You can design your automation test cases such that you can feed some of these interactions to the test case.
For example, if your camera is going to scan a bar code, and your test case is going to act on that bar code, you can design the test case such that, instead of really using the camera, it reads the bar code picture from the file system and does its validations against that. There are several such smart techniques you can use in your test cases, letting you proceed with automation without getting stopped by the limits of today’s automation technologies. Okay.
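A sketch of that seam (the names are illustrative): put the camera behind a small interface so a test build can substitute a stored image for a live scan.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Seam between the app's scanning logic and the physical camera.
interface BarcodeSource {
    byte[] nextBarcodeImage() throws IOException;
}

// The production implementation would wrap the real camera API.
// This test implementation reads a prepared image from the file system,
// so the scanning logic can be exercised without a physical scan.
class FileBarcodeSource implements BarcodeSource {
    private final Path imagePath;

    FileBarcodeSource(Path imagePath) {
        this.imagePath = imagePath;
    }

    public byte[] nextBarcodeImage() throws IOException {
        return Files.readAllBytes(imagePath);
    }
}
```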
I don’t believe I have any other questions, so I will stop the presentation now. I shared my email address at the beginning of the presentation; please feel free to write questions to me at that address, and I will respond to them. Thanks for attending this webinar, and we look forward to getting your feedback and having you attend our future webinars.
Insights from Forrester Research and Perfecto
Leila Modarres: Thank you for joining us. Today’s topic is continuous quality. We’ll be discussing what this means for the mobile application development lifecycle and how your organization can effectively maintain continuous quality to stay agile and stay ahead. We are very excited to bring you our featured speakers today, including Infostretch’s own chief technology officer, Manish Mathuria. Manish will be joined by Carlo Cadet, product lead at Perfecto Mobile, the leader in mobile application quality. And we’ll also be getting perspectives from Michael Facemire, principal analyst at Forrester Research and a leading expert on mobile software development.
This is a two-part presentation series: Part 2 of this webinar will be a firsthand demo of Infostretch’s own solution, hosted by Manish Mathuria. Just a quick note before introducing our speakers: you will have the ability to ask questions using your question pane. Simply type in your question and click send. If we don’t respond to your question during the webinar, we’ll make every effort to get back to you individually after the session. So with that, I would like to turn it over to Michael Facemire.
Michael Facemire: Thanks, and thanks, everybody, for joining today. As was mentioned, we’re talking about continuous quality and how to drive continuous quality in mobile. As for my background here at Forrester, I’m the principal analyst who covers web and mobile development. But I’ve also been a mobile developer for a long part of my life, going all the way back to the Pocket PC, Windows CE, and Palm Pilot days. And we’ve seen a big transition happen from those early days of mobile to the current state of mobile. But one of the biggest things that’s happened is the demand for mobile everything.
And I’m sure those of you on the call are hearing this: your customers want a mobile version of what you have. Your internal business wants mobile versions of their current business processes and business tools. And they’re asking for it at a rate that has never been seen before. In the past, they may have asked for something to be delivered tomorrow, but there was always a little chuckle to it. Now, when they say they want a mobile version of your ERP tooling, they don’t laugh when they say they want it tomorrow, because if you don’t deliver it tomorrow, they’ll find another option.
And that’s one of the biggest drivers of change that we’re seeing in mobile. So not only do we have to deliver it immediately, but we also have to continue to update it and keep pace with the changing demands for it. And on top of all of that, as an underlying current, it still has to have that enterprise level of quality that we’re all used to. So if we think about continuous quality, the word continuous doesn’t mean it starts at some point and continues past that. Continuous means from literally Step 1. And in our research, what we found for folks who have been able to deliver continuous quality is that they start from Day 1.
They literally take the concept of quality and move it to the left. So what does move to the left mean? When we display a project in a Gantt chart or in some of the project tooling, it always goes in a timeline from left to right. So when we say move quality to the left, that means quality has to be part of what you do from Day 1. A developer coming up to me and saying, well, it works on my machine, or it was fast enough in my environment (an environment on a perfect Wi-Fi network, unencumbered by other users, with a pristine device without any other apps installed on it), that’s just not good enough these days.
We have to move quality, and the monitoring of quality, to the left. Similarly, performance is part of overall quality. So understanding our KPIs, our key performance indicators and key business indicators, knowing what they are, being able to quantify them, and moving them to the left is equally important. So many times, in my job here at Forrester, I go into development shops and see that they’re in Sprint 3 or Sprint 4, or Iteration 4, however you term it. And I ask what the performance looks like right now. And they say we don’t start checking for performance until we’re ready to ship, because not everything is in place yet.
So the numbers will vary dramatically. The reality is, if you’re not checking for that all of the time, I guarantee you’ll make decisions that don’t take performance into consideration, and those end up being very, very hard to fix later down the line. And then, finally, one that’s near and dear to my heart as well: in addition to quality and performance, a normalized way of accessing data through standard, consistent, consumable APIs. Move that to the left as well, and make sure you have a consistent way of accessing data all of the time. So with that in mind, that’s changing how we write software.
As I mentioned earlier, the timeframe in which we have to build software is changing significantly. When I first started professionally writing software, back in the 1997 timeframe, we had a very, very waterfall-driven process. The overall process took anywhere between 12 and 18 months. Everyone in every discipline knew when they were in the spotlight: when the design team was doing their thing, when the development team was doing theirs, and when the QA team was doing theirs. And when it wasn’t your turn, you kind of just sat back and had an easier life.
Well, now, that’s changed considerably, because what we see for successful mobile is that a given project tends to take anywhere between three weeks and two months, maybe upwards of three months, from time of inception to time of delivery. And that has to change the SDLC considerably. Early on, a couple of years ago, when a lot of folks were just getting into the modern wave of mobility, what that meant was they simply cut off quality at the end. In their standard waterfall process, they could get design and development done in those first three months.
And then they would spend the next three to six months on QA. So they would just whack that last step, and they would ship to the market with no quality at all. Obviously, that’s a terrible idea, and we need to move quality to the left. And that’s changing our SDLC. One of the biggest things it’s changing is this: we’re moving away from a model in which some old guys with white hair, sitting in a conference room, looking like these two fellows who influenced my childhood significantly, would come up with the requirements on their own, in a vacuum, because we knew what users wanted.
And we knew what we needed to deliver to them. The modern version of this is a very feedback-driven lifecycle. In the past, a project was done when we met a certain number of requirements. The folks on the previous slide would come up with 70 requirements, and we would only ship when 68 of those requirements were done at a certain level of quality. And that quality was defined by test cases written at the same time, before any development was done: 700 test cases that were very structured and very regimented about what would define the quality of this project.
Well, with mobile, that’s really, really difficult to do, because you don’t know how folks are going to use your app until it’s actually in their hands in the wild. For example, we saw one app where a company wanted to provide coupons to folks when they were in the mall. And what they found was the coupon would be on display on the screen, but nobody ever accepted it. Only once they actually watched people using it in the mall did they realize folks would be pushing a cart or pushing a stroller, and the accept button was in the far upper left of the screen, so most right-handed people couldn’t get their thumb up there.
So they would see the coupon and not be able to click it, simply because it was a little bit too hard. And so they would just hit the home button and make it go away, because it wasn’t usable. Only when you actually see it in the wild do you know what the real requirements are. It might be just a subtle change in where a button is put, but getting the feedback from your users is critical. And this is what the feedback-driven lifecycle looks like: define objectives; establish and quantify the performance and quality indicators; create a base minimum viable product.
Quantify the feedback from your users, using feedback tools and watching users use it in real life. Align that feedback with the initially defined KPIs. And then, rinse and repeat. This is how we’re seeing folks really respond to the changes in the mobile SDLC. So with that in mind, Manish, a question for you. As folks make this transition to a feedback-driven lifecycle, first, are you seeing this move being made? Or are you seeing folks try to address the time-to-market challenge and the changes to the SDLC in a different way?
And if they are moving to a feedback-driven lifecycle, how is that affecting them? And how are you at Infostretch able to take advantage of this?
Manish Mathuria: Mike, that’s a great question. Most certainly, we are seeing people realize the need to shift left and also to incorporate feedback. In fact, the market is pushing them. The example you gave is a very pertinent one, where the market, or the customer, is asking them to incorporate feedback into the software development lifecycle.
In my presentation, I will certainly be talking about some techniques that the development team, or the entire team, can incorporate into their software development process to help them move things left, incorporate more feedback, and keep the different parties, QA, developers, etc., continuously aligned on the same target. So the answer to your question is certainly yes. It’s happening in Scrum teams, and it’s happening from the customer side. Feedback is extremely important.
Michael Facemire: Yeah. And are there key challenges that folks are running into, that you’re seeing and are able to help with?
Manish Mathuria: Yes. The direct manifestation of this is on the software development lifecycle. Teams are not used to getting this feedback in real time. And forget market feedback; a lot of it is actually feedback within the team itself. How do you keep the testers, developers, product owners, business analysts, etc., on the same page with respect to continuously changing requirements? The agility that the market demands is fairly critical. And like I said, I’ll be talking about some techniques that we have helped develop with our customers that have produced very good results.
Michael Facemire: Gotcha. Good deal. Carlo, from the Perfecto side, can you shed some light on shifting quality to the left, what that means from your perspective, and some of the interesting challenges you’ve seen in the mobile world as folks try to move quality to the left in their day-to-day SDLC?
Carlo Cadet: Sure. Let me comment on two aspects, Mike. First, we’re seeing more and more organizations move to what some call a Dev-Test construct, where (a) developers are taking on more responsibility for expanded testing, and (b) QA teams are actually shifting their processes to align with the dev organization. For example, instead of using a commercial tool with a scripting language, they’re now writing their own test cases in Java alongside developers.
And that’s really making the QA role a far more technical, more developer-like role. This drives alignment from the beginning between coding and testing, creating a synchronized process. The second thing, in terms of a feedback-driven lifecycle, as you said, is that organizations are moving to deliver software faster.
And we’re seeing a rising number of people embrace continuous integration as a fundamental strategy, where they automate the build process: as soon as new code is committed, it triggers a build-and-test process, essentially shrinking the window of unknown quality between a change and confirmed verification that it actually achieved the intended outcome. So these are two areas we’ve seen happening in the marketplace: first, a move toward the Dev-Test model; and second, the embrace of continuous integration.
Michael Facemire: Yeah. That’s a great point. The move to the Dev-Test model is one that’s near and dear to my heart. As a developer myself, I’ll tell you that nothing slows down a developer like being told: stop what you’re doing and context-switch back to what you did a week ago, because we just now realized that the code you checked in a week ago slowed things down or broke something. So that continuous Dev-Test cycle is incredibly important, because nothing destroys developer productivity worse than context switching there. Now, I’m curious about your thoughts on your relationship with Infostretch: how is that benefiting folks with regard to this feedback-driven lifecycle?
Carlo Cadet: Absolutely. I think that’s a great question, Mike. Our Infostretch relationship is multifaceted, and we’re going to learn a little more in the webinar about the technology aspect. There are really two parts that I’ll stress. One is that we’ve been using the phrase continuous quality, which really nails our perspective and provides the foundation for the integration we have, as a technology partner, with Infostretch: we provide our continuous quality lab, comprised of real devices hosted in a cloud configuration for quality purposes, which Infostretch, through our API, is now taking advantage of to support their test authoring solution.
And in particular, to this point of shifting left, their support of behavior-driven development really starts the quality process at inception. So it’s a really exciting technology relationship between the two organizations.
Michael Facemire: Good deal. So at this point, I’ll pass it over to you to expound upon that a bit.
Carlo Cadet: Sure. Thanks for that. One of the areas where I want to transition a little bit, Mike, is that as we talk about shifting to the left, a fundamental part of the conversation is really about accelerating velocity. We’re shifting to the left and starting the quality process earlier with the fundamental goal of delivering product to the market faster. And this ties into that feedback-driven lifecycle. As opposed to development cycles that took 12 to 18 months, we’re seeing more and more organizations shift from annual releases to quarterly releases and, increasingly, monthly releases.
But as they make that transition, Mike, they’re running into some challenges. I’ve encapsulated on this slide what I’m calling velocity blockers as they relate to delivering better software faster. The first is the recognition that manual testing, while critical and playing an important role, is not scalable. What that really means is that the more manual testing I do, the more it slows me down. The inverse is also true: the more I automate, the greater my ability to accelerate the process and take advantage of key techniques such as continuous integration.
A second blocker for many organizations has, again, to do with time, or more specifically, the tradeoff between available time and coverage. And coverage here, Mike, I define as both test case coverage and device coverage, where I might be exercising the full test suite but only doing it on two devices, although I know, in truth, that to cover 50 percent of my user population, I’ll need perhaps 30-some-odd devices to test on. But I’m consciously making a risk-based decision to test on only two, or perhaps even four, devices. So coverage is really a velocity blocker in the sense that it challenges the available time and forces a risk-based approach.
The third area that we find challenging, and uniquely related to mobile, is the test lab itself. Is the test lab in a test-ready, always-on state? As we accelerate the process and shift testing left, we’re simply doing a rising number of test iterations, which places a greater demand on lab availability. Some of the reports out there indicate that lab availability is one of the common delay factors within the quality process. The fourth area has to do with multiple teams. In the customers we work with, which are frequently large enterprises, development is not one group of eight people in one location.
Rather, it’s done in a highly distributed fashion, where they might have several dev centers and, perhaps, a quality COE located offshore. The key there is to ensure that whatever quality impediments or defects are found, they are efficiently communicated in a way that supports reproducibility and, ultimately, resolution. So collaboration, and providing the right artifacts, matters; some in the audience might know developers who label certain defects as simply non-reproducible: yes, you might have found something, but, no, I can’t reproduce it.
And, therefore, I’m going to move on. That area is a key velocity blocker because it sustains a growing quality debt through the process. The last element that slows velocity, in particular, is slow feedback. As you mentioned at the outset with shift left, and I think shifting testing or shifting quality left is really a critical idea, Mike, and I’m glad you started there, it has to do with bringing quality into the main process, into the main development cycle. Anything done out of cycle, out of that primary development cycle, by definition delivers feedback more slowly than if it had been included.
A good example: perhaps performance testing only starts after a full test suite, sanity testing, a regression suite, and even compatibility testing are accomplished. Well, that’s really late in the cycle, and it potentially creates an opportunity to challenge a go/no-go decision if there is unknown quality and there are open questions around performance, because many organizations recognize that a very common user complaint is simply: this app is not performing as fast as I expected it to. So collectively, these areas represent velocity blockers. What I want to move to, if I can get my computer to participate, is how to unleash velocity.
And really, it starts with automating the process and automating the testing. Many of our organizations are very familiar with the test automation pyramid, an idea introduced a number of years ago, but are still at the stage of trying to put all of the pieces together. What that really means is being able to start the automated testing program at inception: when the build occurs, being able to find out, for mobile apps in particular, whether the app will run on the device, and whether a basic set of smoke tests can be executed as a sanity check for the build.
And then, to discretely exercise both back-end testing and UI testing. Part of the challenge we find with many organizations is that while they pursue automation, they struggle in certain areas; they struggle to develop automation that works. So it’s really critical, for unleashing velocity, to have a code strategy for your testing that is comparable to the code strategy for what you deliver to production. That means exercising common techniques such as class libraries or component libraries, being able to reject the device coverage tradeoff by executing in parallel, and being able to control not only the application under test but the device under test.
When we put all of these pieces together, that’s when we see organizations able to move successfully from perhaps a 10 percent level of automation to 70 or 80 percent, really by putting it together with real devices in a test-ready configuration, using real devices that are in the markets where our end users are. Putting these pieces together and thinking about shifting left, Mike, the question becomes: how do I do it faster, and how do I do it earlier? That really starts at the commit level, when I’m committing the code.
Many are beginning to adopt continuous integration and then recognizing the need to accelerate that out-of-cycle testing I talked about earlier and bring it into the cycle: by embedding, for example, basic performance data within every test case that’s executed, by embedding timers, or by simply varying the test conditions, the networking conditions, to mimic real user behaviors.
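As a minimal sketch of embedding such a timer in a test step (the budget value is an arbitrary example, not a recommended KPI):

```java
class StepTimer {
    // Measures one user-visible step and compares it to a KPI budget,
    // so performance data is collected on every functional run.
    static void timeStep(String stepName, long budgetMillis, Runnable step) {
        long start = System.nanoTime();
        step.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%s took %d ms (budget %d ms)%n",
                stepName, elapsedMillis, budgetMillis);
        if (elapsedMillis > budgetMillis) {
            throw new AssertionError(stepName + " exceeded its performance budget");
        }
    }
}

// Usage inside an ordinary functional test (loginPage is hypothetical):
//   StepTimer.timeStep("login", 2000, () -> loginPage.submit(user, pass));
```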
When we put all of these pieces together, this is when organizations have successfully constructed the recipe to unleash velocity: automating both the process, with continuous integration, and the testing, which is really both functional and nonfunctional testing, and building on a foundation of an always-ready lab. And then, lastly, this is just a short overview, from the Perfecto Mobile perspective, of the components of our solution that Infostretch builds upon to deliver their test automation and authoring solution, as well as executing BDD.
And it’s when these combine that organizations are able to accelerate their velocity and deliver continuous quality at enterprise scale. With that, those are the ideas I wanted to share as we transition to Manish, who is going to talk a little further about the integration they’re bringing to the market, leveraging our continuous quality lab.
Manish Mathuria: Thank you very much, Carlo. And thanks, Mike, for a great introduction to the topic we are talking about today. So let me start from where Carlo left off. Let me first share my screen. All right. Carlo, I want to pick up on your point about people building and releasing on a weekly or monthly basis. Actually, not so much in the mobile world, but in the SaaS world, we are seeing people release on a daily basis. So the emphasis placed on automation, the ask that is put on us, typically, is: I want to release multiple times a day.
And, therefore, you had better get the entire test suite to finish in an hour or so, so that I can release to production. That’s where we are headed. Of course, with mobile apps it is not quite that easy, because app stores, etc., are involved in the process. What that puts pressure on is that we have to start thinking about security, performance, and automation from Day 1, before we write the first line of code. That’s an extreme example of shifting left. So anyway, I’ll be talking more about the challenges this brings, and I’ll make things a little bit more practical.
I’ll take these concepts home and talk about the challenges that agility and this particular process bring, some approaches for incorporating the feedback cycle into your day-to-day software development lifecycle, and the concepts that help with it. I’ll also be showing you certain screenshots of a technical solution to automation, a joint solution between Perfecto and Infostretch, giving you a sneak preview of what this solution looks like. And, like Leila said, there will be a follow-up webinar in which we will do a detailed demo.
So let’s jump straight into it. As we all know, when we start working with agility, there are different parties involved. There’s a product owner, a tester, a developer, and an automation team, and of course there are multiple people who participate in each. And the challenge that comes from this is that, working as one team, the product owner produces a story, then testers write test cases and developers write code, and the automation team writes automation against it.
As time progresses and requirements morph through the release cycles, it becomes a difficult catch-up game, because in a two-week sprint you are required not only to write all of the new functional test cases, but also to keep your regression automation up to date and manage all of the code-related changes, and to do it every sprint cycle, which could be a one- or two-week cycle. So pretty soon, what we start to observe is that your application code and your automation code start to diverge.
And because your automation is written in a language that is not directly related to your test cases, which are written primarily in English or some natural test language, you often cannot tell how closely the automated code relates to the test cases. So this starts to diverge, right? Behavior-driven development is one approach that is a very strong solution to this particular problem. And what it states is something very simple. What we mean by behavior-driven development, or test-driven development, or specification by example (there are several terms) is that you keep the user stories, acceptance criteria, and test scenarios closely knit together.
So you write user stories. You write acceptance criteria. You convert those acceptance criteria into automatable scenarios. And it is precisely these scenarios, written in English, that get automated. As a result, you have a continuously living documentation system that is designed not to produce any divergence. Hence, the feedback loop that keeps the automation code completely in line with the test specs, and the test specs completely in line with the user stories, is there by design.
So let’s look at what this looks like in real life. This is a user story. I won’t go into the details; you can read it. But the bottom line is that it says if a room is returned or cancelled, it should go back into inventory. Now, typically, this user story would be converted into test cases written in Excel, and right there you are introducing a cause for divergence. From that point on, that Excel test case will probably be converted into Java code or QTP code or something like that. That’s yet another point where you introduce divergence.
So as the user story changes, the test specs change and the code changes, and pretty soon you don’t really know what you’re testing or what you’re automating. And when you certify a build through continuous integration, you don’t know exactly what you’re testing. What BDD says is that you write a scenario that looks very much like what’s on the screen right now, and you keep that scenario very close to the user story, perhaps in the same documentation system as the story, or whatever you use for your agile management. And you keep these test scenarios pretty close.
And furthermore, these test scenarios are exactly what gets automated. Here on the screen, I have an example where each statement of the scenario has a driver implementing that statement, and the scenario gets automated precisely by virtue of writing drivers for each of the statements. Now, this brings yet another benefit to automation, which is not easily observable here. As Carlo mentioned, just as in software development, reuse, componentization, etc., are the right principles for coding automation. In other words, you don’t want to create automation that, for each test case, is completely disjoint from every other.
You want to write several reusable components that get used over and over again. Therefore, when the code changes, you are changing it in one place and not in 1,000 places. If you think about it, what BDD also does is that it automatically introduces reuse, because once you write a test spec, or you automate a particular step in a scenario, by definition, it is reusable. So when I create a library of test steps, that library of, say, 100 test steps can be reused across thousands of BDD scenarios. Therefore, if my code changes or my requirements change, I have to change a minimal number of test steps.
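To illustrate the reuse point with the sketch above: once those three step drivers exist, any number of scenarios can be composed from the same phrasing without writing new code. Both of these illustrative scenarios, for example, run against the exact same step library:

    Scenario: Cancellation restores a full inventory
      Given 10 rooms are in inventory
      When a guest books a room and then cancels it
      Then 10 rooms are back in inventory

    Scenario: Cancellation works when inventory is nearly empty
      Given 2 rooms are in inventory
      When a guest books a room and then cancels it
      Then 2 rooms are back in inventory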
So let’s look at the recipe for continuous quality. I will try to summarize a few of the concepts we discussed today and then jump straight into showing you some screenshots of QAS and the solution that we have on the table here. First, one thing we talked about is continuous feedback. Continuous feedback comes from two angles. One is from the customers: there are techniques that a team can deploy to automate the feedback from the customers by introducing certain technologies in the app, which is outside the scope of our current discussion.
However, internally, within your Scrum team, living documentation and specification by example, which are synonyms for BDD, are, by definition, introducing continuous feedback into your software development lifecycle. Second, continuous engagement means that the Scrum team is always on one page. It is not that the product owner is saying one thing, the developer understands something else, the tester writes specs for something else again, and the automation is trying to play a catch-up game with all of these things. What it means is that the system allows all of these parties to speak one language.
And when they say that a particular user story is being tested in a certain way, it is always true. So creating a system of engagement and a process of engagement that actually allows all of these parties to talk together is a strong tenet of continuous quality. The next thing is continuous integration. Carlo mentioned and underscored the importance of continuous integration. But we say it this way: if you are not automating the process of automation, your automation is useless. It is mindless to create automated test cases if you are going to execute them by hand.
The requirement that the business puts on you to release your code multiple times a day, or multiple times a week, or multiple times a month is not going to be achieved if you are going to execute your automation by hand. So continuous integration is as important as, or perhaps even more important than, actually automating your test cases in the first place. So your automated test cases, as well as your build, should be integrated into one tight system where, whenever a build happens, and that could happen multiple times a day, your test cases run right alongside.
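As a rough sketch of that wiring, and assuming a plain JUnit/Maven setup rather than any specific product: you collect the automated scenarios into a suite, and the CI server simply runs the build’s test goal on every commit, so the tests execute without anyone touching them.

    // RegressionSuite.java -- a plain JUnit 4 suite a CI job can run
    // on every build (for example via `mvn test`); the suite members
    // listed here are hypothetical test classes.
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        RoomInventoryTest.class,  // hypothetical automated scenario runner
        CheckoutFlowTest.class    // hypothetical automated scenario runner
    })
    public class RegressionSuite { }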
And finally, there is the need for a continuous environment. So you can catch the drift here: to get continuous quality, these several continuous elements have to fall into place. And a continuous environment is the always-on, on-demand availability of real devices that you can tap into whenever required, whenever continuous integration runs, such that these test cases can actually be executed as you need them. So let’s look at the solution elements. There are two critical components that we are going to be talking about. QAS is QMetry Automation Studio.
And the Perfecto Mobile cloud Selenium driver is the critical component that integrates the two. That’s the glue between the mobile automation and your BDD test cases. So let’s look at how. QMetry Automation Studio is, basically, an Eclipse-based tool that is essentially an authoring platform for creating BDD-based test cases that can be automated using multiple drivers. It could be Perfecto drivers, or it could be any other drivers that you automate your test cases against. Being an authoring tool, it has all of the best practices and principles built in, such that you can do data-driven testing and consume your data from any kind of CSV file.
It has a very extensive reporting element built into it. It promotes, encourages, and sometimes enforces usage of the right kind of design and usage patterns from the tool itself. And it makes development of BDD extremely easy by providing a very user-interface-driven method to drag and drop test steps onto a BDD scenario. I’ll show you some of the screenshots next. So this is what the solution components look like together. What’s in yellow here is the QAS platform, which is the test authoring layer that allows you to create BDD- or Java-driven test cases.
And then there is a foundational layer underneath, which is a repository of your objects, a repository of your test steps, and the other object libraries that you create to make your automation reusable. And underneath that, there is a set of drivers, and we’ll talk about the Perfecto Selenium driver today, which allows you to actually execute these tests at run time on Perfecto cloud devices via an HTTP REST protocol that is exercised every time a test case runs. So here are some of the screenshots. What this screenshot shows is the basic structure of QAS as a tool.
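For a sense of what that looks like from the test’s side, here is a minimal sketch of driving a cloud device through the standard Selenium RemoteWebDriver interface. The hostname, credentials, and capability names below are placeholders, not Perfecto’s actual values; consult your cloud’s documentation for the exact endpoint and capabilities:

    import java.net.URL;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class CloudDeviceExample {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("user", "you@example.com");        // placeholder
            caps.setCapability("password", "********");           // placeholder
            caps.setCapability("deviceName", "0123456789ABCDEF"); // placeholder device id

            // Every WebDriver command the test issues becomes an HTTP
            // call to the cloud's remote WebDriver endpoint.
            RemoteWebDriver driver = new RemoteWebDriver(
                new URL("https://mycloud.example.com/wd/hub"), caps);

            driver.get("https://www.example.com");
            System.out.println(driver.getTitle());
            driver.quit();
        }
    }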
What you see to the left is the structure and format that a QAS project takes, with clearly delineated areas for storing your resources, your scenarios, your source code, and so on. QAS is an Eclipse tool, and the Perfecto Selenium driver has an Eclipse plug-in, so the two fit together nicely. What you see to the right here is the Perfecto plug-in in the Mobile Cloud perspective of Eclipse, where we can open a device and inspect its objects, along with the other mechanisms through which we can create the object repository.
We can get to a particular screen in the app, point and click on a specific object, and it helps us pick up the object and suggests what the object locator should be, with which we can actually drive the test case. And it enables you to build the object repository that you see in the middle here, which is a very structured representation of your objects: each one gets a particular key that is used in the code, a locator, and a user-defined description. By the way, all of these are editable, so you can change them the way you want. And the description is what shows up in the report.
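As a minimal sketch of the keyed-locator idea, independent of QAS’s actual storage format (the file layout and class below are illustrative): the repository maps a stable key to a locator, and test code only ever refers to the key, so a UI change means editing one entry rather than many tests.

    // locators.properties (illustrative format):
    //   login.username = //input[@id='username']
    //   login.submit   = //button[@type='submit']

    import java.io.FileReader;
    import java.util.Properties;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class ObjectRepository {
        private final Properties locators = new Properties();

        public ObjectRepository(String path) throws Exception {
            try (FileReader in = new FileReader(path)) {
                locators.load(in);
            }
        }

        // Tests look objects up by key; the locator lives in one place.
        public WebElement find(WebDriver driver, String key) {
            return driver.findElement(By.xpath(locators.getProperty(key)));
        }
    }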
So this is how you build the object repository that gets used in the code. The next screen shows you the BDD perspective, or the QAS perspective, as we call it. And this is a user-interface-driven method of creating your BDD. It helps you create BDDs in two main ways. One is you can type the scenario steps, and it has look-ahead, so it completes your typing by picking up the existing steps that are already defined in your framework. Or you can drag and drop these steps from what you see as your step repository onto your scenario.
And, therefore, you can build the BDD scenarios in a very interactive manner. Your scenarios can be completely data-driven, so you can define the data as XML or as CSV, again in a very interactive manner. And you can also define several user-defined attributes, or meta tags, against these scenarios so that you can search and filter the scenarios in whatever way you define. So, for example, if you wanted to create a subset of your test suite, you could do it entirely on the basis of some of these attributes; for example, run the P1 test cases which meet the smoke criteria.
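As a sketch of the data-driven idea in plain Java, assuming a TestNG-style setup rather than QAS’s own mechanism (the class, file name, and tags below are illustrative): each CSV row becomes one execution of the scenario, and group tags such as P1 or smoke let you select subsets of the suite at run time.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class BookingDataDrivenTest {

        // Reads booking-data.csv (header row "rooms,expected") and
        // turns each remaining row into one set of test parameters.
        @DataProvider(name = "bookingData")
        public Object[][] bookingData() throws Exception {
            List<String> lines = Files.readAllLines(Paths.get("booking-data.csv"));
            return lines.stream()
                        .skip(1)  // skip the header row
                        .map(line -> (Object[]) line.split(","))
                        .toArray(Object[][]::new);
        }

        // The group tags play the role of the meta tags described above,
        // e.g. run only P1 smoke tests with `mvn test -Dgroups=P1,smoke`.
        @Test(dataProvider = "bookingData", groups = { "P1", "smoke" })
        public void cancelledRoomReturnsToInventory(String rooms, String expected) {
            // ... drive the booking scenario with this row's data ...
        }
    }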
The third screenshot I have is of a report that comes out of QAS. QAS automatically captures your screens from the device that is actually executing the test; that device lives in the Perfecto cloud. And we automatically show all of the assertions that you’re making. Basically, we show all of the steps that comprise the scenario, and underneath the steps, in the step code, if there are any assertions being made, the report automatically picks up on them and shows you which assertion is passing and which is failing. It also captures screenshots for failed assertions, or even for passed assertions if you configure it that way.
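As a sketch of how a step implementation can feed such a report, assuming plain Selenium (the helper name and the report path are illustrative, not QAS’s API):

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    public class ReportingHelper {

        // Checks a condition and, on failure, saves a screenshot of the
        // device at the moment the assertion ran before failing the test.
        public static void verify(WebDriver driver, boolean condition,
                                  String description) throws Exception {
            if (!condition) {
                Files.createDirectories(Paths.get("reports"));
                File shot = ((TakesScreenshot) driver)
                                .getScreenshotAs(OutputType.FILE);
                Files.copy(shot.toPath(),
                           Paths.get("reports", description + ".png"));
                throw new AssertionError("Failed: " + description);
            }
        }
    }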
It does several other things as well: it shows you trends of passed and failed test cases, it allows you to inspect the test environment in detail, and so on. Like I said, I will be giving a very detailed demonstration and overview of this solution in a follow-up webinar, so stay tuned for that. With that, I want to close the slide deck part of the webinar, and we are open for questions. Carlo, myself, and Mike will be taking all of your questions.
Leila Modarres: Thank you very much, Manish, and thank you, everyone, for your participation. We hope that you found the presentation useful. If you have any questions, our contact information is provided on the screen. I know a number of you may have posted questions, and we tried to get back to everybody, but if we haven’t gotten back to you, we’ll reply offline. And also, last but not least, if you missed any part of this webinar, a recording will be available on the Infostretch website and across all of our social media sites. So tune in very soon. Thank you, everyone, and have a great day.