Achieving Continuous Quality with Mobile Apps

Insights from Forrester Research and Perfecto

Leila Modarres: Thank you for joining us. Today’s topic is continuous quality. We’ll be discussing what this means when it comes to the mobile application development lifecycle and how your organization can effectively maintain continuous quality to stay agile and stay ahead. We are very excited to bring you our featured speakers today. This includes Info Stretch’s own chief technology officer, Manish Mathuria. Manish will be joined by Carlo Cadet, product lead at Perfecto Mobile, the leader in mobile application quality. And we’ll also be getting perspectives from Michael Facemire, principal analyst at Forrester Research and a leading expert on mobile software development.

This is a two part presentation series. So Part 2 of this webinar will be a firsthand demo of Info Stretch’s own solution hosted by Manish Mathuria. Just a quick note before introducing our speakers, you will have the ability to ask questions by using your question pane. So simply type in your question, and click send. If we don’t respond to your question during the webinar, we’ll make every effort to get back to you individually after the session. So with that, I would like to turn it over to Michael Facemire.

Michael Facemire: Thanks, and thanks, everybody, for joining today. As was mentioned here, we’re talking about continuous quality and how to drive continuous quality in mobile. As for my background, here at Forrester, I’m the principal analyst that covers web and mobile development. But I’ve also been a mobile developer for a long part of my life, going all the way back to the Pocket PC, Windows CE, and Palm Pilot days. And we’ve seen a big transition happen from those early days of mobile into the current state of mobile. But one of the biggest things that’s happened is the demand for mobile everything.

And I’m sure those of you on the call are hearing this: your customers want a mobile version of what you have. Your internal business wants mobile versions of their current business processes and business tools. And they’re also asking for it at a rate that has never been asked for before. In the past, they may have asked for something to be delivered tomorrow. But there was always a little chuckle to it. Now, when they say we want a mobile version of your ERP tooling, they don’t laugh when they say they want it tomorrow because, if you don’t deliver it tomorrow, they’ll find another option.

And that’s one of the biggest drivers of change that we’re seeing in mobile. And so not only do we have to deliver it immediately, but we also have to continue to update it and continue to keep pace with the changing demands for it. And then, on top of all of that, kind of an underlying current is that it still has to have that enterprise level of quality that we’re all used to. And so if we think about continuous quality, the word continuous doesn’t mean it starts at some point and continues past that. Continuous means from literally Step 1. And so to do that, in our research, what we found for folks that have been able to deliver continuous quality is that they start from Day 1.

And they, literally, take the concept of quality and move it to the left. And so what does move to the left mean? When we display a project in a Gantt chart or in some of the project tooling, it always goes in a timeline from left to right. So when we say move quality to the left, that means quality has to be a part of what you do from Day 1. So a developer coming up to me and saying, well, it works on my machine, or it was fast enough in my environment, my environment that was on a perfect Wi-Fi network that was unencumbered by other users with a pristine device without any other apps installed on it, that’s just not good enough these days.

We have to move quality and the monitoring of quality to the left. Similarly, performance is part of overall quality. So understanding our KPIs, our key performance indicators, our key business indicators, knowing what they are, being able to quantify those, and moving those to the left, that’s equally important. So many times, I go into development shops in my job here at Forrester, and I see that they’re in Sprint 3 or Sprint 4 or Iteration 4, however you term it. And I ask what the performance looks like right now. And they say we don’t start checking for performance until we’re ready to ship because not everything is in place yet.

So the numbers will vary dramatically. And the reality is if you’re not checking for that all of the time, I guarantee you’ll make decisions that don’t take performance into consideration. And, therefore, those end up being very, very hard to fix later on down the line. And then, finally, this is one that’s near and dear to my heart as well: in addition to quality, in addition to performance, a normalized way of accessing data through standard, consistent, consumable APIs. Move that to the left as well, and make sure that you have a consistent way of accessing data all of the time. So with that in mind, that’s changing how we write software.

So I had mentioned earlier the timeframe in which we have to build software is changing significantly. When I first started professionally writing software back in the 1997 timeframe, we had a very, very waterfall driven process. The overall process took anywhere between 12 and 18 months. Everyone in every discipline knew when they were in the spotlight, when the design team was doing their thing, when the development team was doing their thing, and when the QA team was doing their thing. And when it wasn’t your time, you kind of just sat back and had an easier life.

Well, now, that’s changed considerably because now, what we see for successful mobile is it tends to take anywhere between three weeks and two months, maybe upwards of three months for a given project from time of inception to time of delivery. And so therefore, that has to change the SDLC considerably. Early on, a couple of years ago, when a lot of folks were just getting into the modern wave of mobility, what that meant was they simply would cut off quality at the end. They said, with our standard waterfall process, we could get design and development done in those first three months.

And then, we would spend the next three to six months on QA. And so they would just whack that last step. And so they would ship to the market with no quality at all. Obviously, that’s a terrible idea. And we need to move quality to the left. And that’s changing our SDLC. And one of the biggest things it’s changing is this. We’re moving away from a model in which some old guys with white hair sitting in a conference room, guys that look like these two fellows that influenced my childhood significantly, would come up with the requirements on their own in a vacuum because we knew what users wanted.

And we knew what we needed to deliver to them. Now, the modern version of this is a very feedback driven lifecycle. So in the past, a project was done when we met a certain number of requirements. These folks on the previous slide would come up with 70 requirements, and we would only ship when 68 of those requirements were done at a certain level of quality. And that quality was defined by test cases that were defined at the same time. Defined before any development was done. So 700 test cases that were very structured and very regimented about what would define quality of this project.

Well, with mobile, that’s really, really difficult to do because you don’t know how folks are going to use your app until it’s actually in their hand in the wild. For example, we saw one app where a company wanted to provide coupons to folks when they were in the mall. And what they found was the coupon would be displayed on the coupon screen, but nobody ever accepted it. And only once they actually saw people using it in the mall did they realize folks would be pushing a cart. Folks would be pushing their stroller. And the accept button was in the far upper left of the screen, and so most right handed people couldn’t get their thumb up there.

So they would see the coupon and not be able to click it simply because it was a little bit too hard. And so they would just hit the home button and make it go away because it wasn’t usable. And so only when you actually see it in the wild do you know what the real requirements are. And it might be just a subtle change of where a button is put. But getting the feedback from your users is critical. And so this is what we’re seeing the feedback driven lifecycle look like. It’s defining objectives. Establishing and being able to quantify the performance indicators, the quality indicators. Create a base minimum viable product.

Quantify that feedback from your users using feedback tools, watching users use it in real life. Align that feedback with the initially defined KPIs. And then, rinse and repeat. And this is how we’re seeing folks really respond to the changes in the mobile SDLC. So with that in mind, Manish, question for you. As folks are making this transition into a feedback driven lifecycle, first, are you seeing this move made? Or are you seeing folks try to do this in a different way? Are they trying to address the time to market challenge and the changes to the SDLC in a different way?

And if they are moving to a feedback driven lifecycle, how is that affecting them? And how are you at Info Stretch able to kind of take advantage of this?

Manish Mathuria: So Mike, that’s a great question. And most certainly, we are seeing people realizing the need to shift left and also to introduce the feedback. In fact, the market is pushing them. The example you gave is a very pertinent one where the market or the customer is asking them to incorporate feedback in the software development lifecycle.

In my presentation, I will certainly be talking about some of the techniques that the development team or the entire team can incorporate in their software development process that help them move things left, incorporate more feedback, and keep different parties, QA, developers, etc., continuously aligned on the same target. So the answer to your question is certainly yes. It’s happening in SCRUM teams. It’s happening from the customer side. But feedback is extremely important.

Michael Facemire: Yeah. And are there key challenges that folks are running into that you’re seeing that you’re able to help with?

Manish Mathuria: Yes. And the direct manifestation of this is on the software development lifecycle. Teams are not used to getting this feedback in real time. And forget market feedback. A lot of it is actually feedback within themselves. How do you keep the testers, developers, product owners, business analysts, etc., on the same page with respect to continuously changing requirements? And the agility that the market demands is fairly critical. And like I said, I’ll be talking about some techniques that we have developed, helped develop with our customers that actually have produced very good results.

Michael Facemire: Gotcha. Good deal. Carlo, from the Perfecto side, can you shed some light on shifting quality to the left and what that means from your perspective and some of the interesting challenges that you’ve seen from the mobile world as folks try to move quality to the left in their day to day SDLC?

Carlo Cadet: Sure. Let me, perhaps, comment on two aspects, Mike. So 1) we’re seeing more and more organizations move to what others call a Dev Test Construct where really A) developers are taking on more responsibility to do expanded testing, and B) that QA teams are actually shifting their processes to really align with a Dev organization. For example, instead of using a commercial tool with a scripting language, they’re shifting now to writing their own test cases in Java alongside developers.

And that’s really making the QA role a far more technical role and more developer-like. And this drives alignment from the beginning between coding and testing and creating a synchronized process. And then, perhaps the second thing in terms of a feedback driven lifecycle, as you said, so organizations are moving to deliver software faster.

And we’re seeing a rising number of people that embrace continuous integration as a fundamental strategy, where they’re automating the build process so that as soon as new code is committed, it triggers a build and test process, essentially trying to shrink the time of unknown quality between a change and a confirmed verification that it actually has achieved the outcome. And so these are two areas that we’ve seen happening in the marketplace. 1) A move towards the Dev Test model. And 2) the embrace of continuous integration.

Michael Facemire: Yeah. That’s a great point. That move to the Dev Test model is one that’s near and dear to my heart. As a developer myself, I’ll tell you that nothing really slows down a developer like telling him to stop what he’s doing and context switch back to what he did a week ago when he checked in some code, because we just now realized that that code slowed things down or broke something. So that continuous Dev Test cycle is incredibly important because nothing destroys developer productivity more than context switching there. Now, curious of your thoughts – your relationship with Info Stretch, how is that benefiting folks with regards to this feedback driven lifecycle?

Carlo Cadet: Absolutely. I think that’s a great question, Mike. Our Info Stretch relationship is multifaceted. And we’re going to learn a little bit more in the webinar from the technology aspect. There are really two parts that I’ll stress. 1) We’ve been using the phrase continuous quality, which really nails our perspective and provides the foundation for the integration that we have as a technology partner with Info Stretch, where we are providing our continuous quality lab, which is comprised of real devices hosted in a cloud configuration for quality purposes, and which Info Stretch, with our API, is now taking advantage of to support their test authoring solution.

And 2), in particular, what I think speaks to this point of shifting left: their support of behavior driven development, which really starts the quality process at inception. So it’s really an exciting technology relationship between the two organizations.

Michael Facemire: Good deal. So at this point, I’ll pass it over to you to expound upon that a bit.

Carlo Cadet: Sure. Thanks for that. So one of the areas here that I want to transition a little bit, Mike, is as we talk about shifting to the left, a fundamental part of the conversation is really about accelerating velocity. We’re shifting to the left and starting the quality process earlier with the fundamental goal of delivering product to the market faster. And this has to do with that feedback driven lifecycle. As opposed to development cycles that were taking 12 to 18 months, we’re seeing more and more organizations shift from annual releases to quarterly releases and, increasingly, monthly releases.

But as they make that transition, Mike, they’re running into some challenges. And I’ve encapsulated on this slide what I’m calling velocity blockers as it relates to delivering better software faster. The first is the realization or the recognition that manual testing, while critical and playing an important role, is non scalable. And what it really means is that the more manual testing I do, the more it’s going to slow me down. And, therefore, the inverse is true, which is the more that I automate, the greater the ability I have to accelerate my process and to take advantage of key techniques such as continuous integration.

A second blocker that we find for many organizations has, again, to do with time, or more specifically, the tradeoff between available time and coverage. And coverage here, Mike, I define as both test case coverage as well as device coverage, where I might be exercising the full test suite but only doing it on two devices. Although I know, in truth, that to cover 50 percent of my user population, I’ll need perhaps 30 some odd devices to test, I’m consciously making a risk based decision to only test on two or perhaps even four devices. And so coverage is really a velocity blocker in the sense that it’s constrained by that available time and forced into a risk based approach.

The third area that we find challenging and uniquely related to mobile is the test lab itself. And really, is the test lab in a test ready, always on state? As we accelerate the process and shift testing left, that means we’re simply doing a rising number of test iterations, which places a greater need for lab availability. Some of the reports that are out there indicate that lab availability is one of the common delay factors within the quality process. And then, the fourth area has to do with multiple teams. What we’re seeing in the customers that we work with, which are frequently large enterprises, is that development is not one group of eight people in one location.

Rather, it’s done in a highly distributed fashion where they might have several dev centers and, perhaps, a quality COE located offshore. But the key aspect there is to ensure that whatever potential quality impediments are found, whatever defects are found, they need to be efficiently communicated in a way that supports reproducibility and, ultimately, resolution. And so it’s about collaboration and providing the right artifacts. Some in the audience might know developers who label certain defects as simply non reproducible: yes, you might have found something, but, no, I can’t reproduce it.

And, therefore, I’m going to move on. That area is a key velocity blocker because it sustains a growing quality debt through the process. And then, the last element that slows velocity, in particular, is slow feedback. As you mentioned at the outset with shift left, and I think shifting testing or shifting quality left is really a critical idea, Mike, and I’m glad you started there. It has to do with also bringing quality into the main process, into the main development cycle. So really, anything that is done out of cycle, out of that primary development cycle, by definition, delivers feedback slower than had it been included.

And so a good example is, perhaps, after a full test suite is accomplished through sanity testing and a regression suite and even, perhaps, compatibility, that’s when performance testing starts. Well, that’s really late in the cycle. And really, it potentially creates an opportunity to challenge a go/no go decision if there’s unknown quality and there are questions as it relates to performance, because many organizations recognize that a very common complaint from users is simply this app is not performing as fast as I expected it to. So collectively, these areas represent velocity blockers. And so what I want to move to, if I can get my computer to participate, is how to unleash velocity.

And really, it starts with automating the process and automating the testing. Many of our organizations are very familiar with the automation test pyramid, an idea that was introduced a number of years ago, but they are really still in the place where they’re trying to put all of the pieces together. And what that really means is being able to start the process of automating the testing program at the inception. And that’s really when the build occurs: being able to find out, for mobile apps in particular, whether that app will run on the device, whether a basic set of smoke tests can be done to execute a sanity check for the build.

And then, to discretely exercise both back end testing as well as UI testing. And part of the challenge that we find with many organizations is that while they pursue automation, they struggle in areas. And they struggle in terms of developing automation that works. So it’s really critical for unleashing velocity to have a code strategy with your testing that is comparable to the code strategy for what you’re going to deliver in production. And that means being able to exercise common techniques such as class libraries or component libraries, being able to reject the device coverage tradeoff by executing in parallel, and also being able to control not only the application under test but the device under test.
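As a rough illustration of that parallel execution idea, here is a minimal Java sketch that fans the same suite out across several cloud hosted devices at once. The hub URL, capability name, and device IDs are placeholders, not Perfecto’s actual API; consult your lab provider’s documentation for the real values.

```java
import java.net.URL;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ParallelDeviceRun {

    // Placeholder hub URL; substitute your cloud lab's Selenium endpoint.
    private static final String HUB_URL = "https://example-lab.example.com/wd/hub";

    public static void main(String[] args) {
        // One suite run per real device, executed concurrently instead of serially.
        List<String> deviceIds = List.of("device-A", "device-B", "device-C");
        ExecutorService pool = Executors.newFixedThreadPool(deviceIds.size());
        deviceIds.forEach(id -> pool.submit(() -> runSmokeSuite(id)));
        pool.shutdown();
    }

    private static void runSmokeSuite(String deviceId) {
        try {
            DesiredCapabilities caps = new DesiredCapabilities();
            // Capability name is illustrative; cloud providers differ.
            caps.setCapability("deviceName", deviceId);
            RemoteWebDriver driver = new RemoteWebDriver(new URL(HUB_URL), caps);
            try {
                // ... run the same smoke tests against this device ...
            } finally {
                driver.quit();
            }
        } catch (Exception e) {
            System.err.println("Run failed on " + deviceId + ": " + e.getMessage());
        }
    }
}
```

Running the identical suite on N devices in the time of one run is what lets teams reject the two-device tradeoff rather than accept it.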

When we put all of these pieces together, that’s when we see that organizations are able to successfully move from perhaps a 10 percent level of automation to 70 or 80 percent, and, really, to put it together with real devices in a test ready configuration. And those have the attribute of leveraging real devices that are in the markets where our end users are. And by putting these pieces together and now thinking about shifting left, Mike, where it comes to the table is: how do I do it faster? And how do I do it earlier? And that really starts at the commit level, when I’m committing the code.

And many are beginning to adopt continuous integration and then recognizing the need to accelerate that out of cycle testing that I talked about earlier and bring it into the cycle: by embedding, for example, basic performance data within every test case that’s executed, by embedding timers, or by simply varying the test conditions, the networking conditions, to mimic real user behaviors.
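One lightweight way to embed a timer in every test case, sketched in Java; the labels and millisecond budgets here are hypothetical and would come from your own KPIs:

```java
import java.util.function.Supplier;

public final class TimedStep {

    // Wraps any test action with a timing check, so each functional test
    // also emits a basic performance data point and fails if it blows
    // its budget.
    public static <T> T timed(String label, long budgetMs, Supplier<T> action) {
        long start = System.nanoTime();
        T result = action.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%s took %d ms (budget %d ms)%n", label, elapsedMs, budgetMs);
        if (elapsedMs > budgetMs) {
            throw new AssertionError(label + " exceeded budget: " + elapsedMs + " ms");
        }
        return result;
    }
}
```

A step definition could then call, say, `timed("login", 2000, () -> loginPage.submit(user, pass))` (hypothetical page object), so basic performance feedback arrives on every CI run rather than at the end of the cycle.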

When we put all of these pieces together, this is when organizations have successfully constructed the recipe to unleash velocity: by automating both the process, with continuous integration, and the testing, which is really both functional and nonfunctional testing, and building that off of a foundation of an always ready lab. And then, lastly, this is just a short overview, from the Perfecto Mobile perspective, of the components of our solution that Info Stretch builds upon to deliver their test automation, their authoring solution, as well as executing BDD.

And it’s when these combine together that organizations are able to accelerate their velocity to deliver continuous quality at an enterprise scale. And with that, those are the ideas I wanted to share as we transition to Manish, who is going to talk a little further with regards to the integration that they’re bringing to the market leveraging our continuous quality lab.

Manish Mathuria: Thank you very much, Carlo. And thanks, Mike, for doing a great introduction to the topic we are talking about today. So let me start from where Carlo left off here. Let me first share my screen. All right. So Carlo, I want to pick up on the point you made about seeing people building and releasing on a weekly or a monthly basis. Actually, not so much in the mobile world, but in the SaaS world, we are seeing people releasing on a daily basis. So the emphasis that is put on automation, or the ask that is put on us, typically, is that I want to release multiple times a day.

And, therefore, you better get the entire test suite done in an hour or so, so that I can release to production. That’s where we are coming to. Of course, with respect to mobile apps, it is not quite that easy because app stores, etc., are involved in the process. So what that puts pressure on is that we have to start thinking about security, about performance, about automation from Day 1, before we write the first line of code. And that’s like an extreme example of shifting left. So anyway, what I’ll be talking about more is what these challenges bring. I’ll make things a little bit more practical.

I’ll take these concepts home and talk about what challenges agility and this particular process bring, what some of the approaches are to incorporate the feedback cycle in your day to day software development lifecycle, and what concepts would help it. I’ll also be showing you certain screenshots of a technical solution to automation, which is a joint solution between Perfecto and Info Stretch, giving you a sneak preview of what this solution looks like. And, like Leila said, there will be a follow up presentation, or a follow up webinar, in which we will do a detailed demo.

So let’s jump straight into it. As we all know, when we start to work with agility, there are different parties involved. There’s a product owner. There’s a tester. There’s a developer. And there’s an automation team, and of course, there are multiple people who participate in this. And the challenge that comes from this is that, working in one team, a product owner produces a story, then testers and developers write code, and the automation teams write automation against it.

As time progresses, and as requirements morph through the release cycles, it is a difficult catch up game because, in a two week sprint, you are required not only to write all of the new functional test cases, but also to keep your regression automation up to date and manage all of the code related changes, and to do it every sprint cycle, which could be a one or two week cycle. So pretty soon, what we start to observe is that your code and your automation code start to diverge.

And because your automation is written in a language which is not directly related to your test cases, which are written primarily in English or in some other natural language, you often cannot tell how related the automated code is to the test cases. So this starts to diverge, right? Behavior driven development is one approach that is a very strong solution to this particular problem. And what it states is something very simple. What we mean by behavior driven development, or specification by example, there are several terms for it, is that you will keep the user stories, acceptance criteria, and the test scenarios closely knit together.

So you will write user stories. You will write acceptance criteria. You will convert these acceptance criteria into automatable scenarios. And it is precisely these automation scenarios, which are written in English, that will get automated. So as a result, what you have is this continuously living documentation system that is designed not to produce any divergence. Hence, the feedback loop that should happen, from keeping the automation code completely in line with the test specs, and the test specs completely in line with the user stories, is achieved by design.

So let’s look at what this looks like in real life. So this is a user story. And I won’t go into the details of the user story. You can read it. But the bottom line is all it says is that if a room gets returned or cancelled, it should go back to the inventory. Now, typically, this user story would be converted to test cases, which are written in Excel. And right there, you are introducing a cause for divergence. From that point on, that Excel test case will probably be converted into Java code or QTP code or something like that. Again, that’s yet another point where you are introducing divergence.

So as the user story changes, the test specs change, and the code changes. Pretty soon, you don’t really know what you’re testing or what you’re automating. And when you certify a build through continuous integration, you don’t know exactly what you’re testing. So what BDD says is that you would write a scenario that looks very much like what you have on the screen right now. And you will keep the scenario very close to the user story, perhaps in the same documentation system, or whatever you use for your agile management. And you will keep these test scenarios pretty close.

And furthermore, these test scenarios are exactly what gets automated. So here, on the screen, I have an example where each statement of the scenario has a driver to implement that statement. And that scenario is exactly what gets automated, by virtue of writing drivers for each of the statements. Now, this brings yet another benefit to automation, which is not easily observable here. But as Carlo mentioned, just like in software development, reuse, componentization, etc., are the right principles for coding automation. So in other words, you don’t want to create automation that, for each test case, is completely disjoint from every other.

You want to write several reusable components that get used over and over again. Therefore, when the code changes, you are changing it in one place and not in 1,000 places. If you think about it, what BDD also does is automatically introduce reuse because, once you write a test spec, or you automate a particular step in a scenario, by definition, it is reused. So when I create a library of test steps, that library of 100 test steps can be reused for thousands of BDD scenarios. Therefore, if my code changes or my requirements change, I have to change a minimal number of test steps.
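To make the statement-driver idea concrete, here is a minimal Cucumber-JVM sketch for the room inventory story above. The Gherkin wording and the in-memory InventoryService are hypothetical stand-ins for the real scenario text and application API:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Hypothetical scenario these drivers implement:
//   Given a room has been booked by a guest
//   When the booking is cancelled
//   Then the room returns to the available inventory
public class RoomInventorySteps {

    // Minimal in-memory stand-in for the app's real inventory API.
    static class InventoryService {
        private int available = 10;
        int availableRooms() { return available; }
        void book(String roomId) { available--; }
        void cancel(String roomId) { available++; }
    }

    private final InventoryService inventory = new InventoryService();
    private int availableBefore;

    @Given("a room has been booked by a guest")
    public void aRoomHasBeenBooked() {
        availableBefore = inventory.availableRooms();
        inventory.book("room-101");
    }

    @When("the booking is cancelled")
    public void theBookingIsCancelled() {
        inventory.cancel("room-101");
    }

    @Then("the room returns to the available inventory")
    public void theRoomReturnsToInventory() {
        if (inventory.availableRooms() != availableBefore) {
            throw new AssertionError("Room was not returned to inventory");
        }
    }
}
```

Because each step phrase is matched by its annotation text, any scenario that reuses the same phrase reuses the same driver, which is exactly the reuse-by-definition point above.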

So let’s look at what the recipe for continuous quality is. I will try to summarize a few of the concepts we discussed today and then jump straight into showing you some of the screenshots of QAS and the solution that we have on the table here. First, the one thing we talked about is continuous feedback. Continuous feedback comes from two angles. One is from the customers. There are techniques that a team can deploy to automate the feedback from the customers by introducing several technologies in the app, which is outside of the scope of our current discussion.

However, internally, within your SCRUM team, living documentation and specification by example, which are synonyms for BDD, are, by definition, introducing continuous feedback into your software development lifecycle. Second, continuous engagement means that the SCRUM team is always on one page. It is not that the product owner is saying one thing, the developer understands something else, the tester writes specs for something else, and the automation is trying to play a catch up game with all of these things. What it means is that the system allows all of these parties to speak one language.

And when they say that they have a particular user story that is being tested in a certain way, it is always true. So creating a system of engagement and a process of engagement that actually allows all of these parties to talk together is a strong tenet of continuous quality. The next thing is continuous integration. Carlo mentioned and emphasized the importance of continuous integration. But we say it this way: if you are not automating the process of automation, your automation is useless. It is mindless to create automated test cases if you are going to execute them by hand.

The requirement that the business puts on you to release your code multiple times a day, or multiple times a week, or multiple times a month is not going to be achieved if you are going to execute your automation by hand. So continuous integration is as important or, perhaps, even more important than actually automating your test cases in the first place. So your automated test cases, as well as your build, should be integrated in one tight system where, whenever a build happens, and that could happen multiple times a day, your test cases run right alongside.

And finally, there is a need for having a continuous environment. You can catch the drift here: to deliver continuous quality, these several continuous elements have to fall into place. And a continuous environment is the always on, on demand availability of real devices that you can tap whenever required, whenever continuous integration runs, such that these test cases can actually be executed as you need them. So let’s look at the solution elements. There are two critical components that we’re going to be talking about. QAS is QMetry Automation Studio.

And the Perfecto Mobile cloud Selenium driver is the critical component that integrates the two. That’s the glue between the mobile automation and your BDD test cases. So let’s look at how. QMetry Automation Studio is, basically, an Eclipse based tool that is essentially an authoring platform for creating BDD based test cases that can be automated using multiple drivers. It could be Perfecto drivers. Or it could be any other drivers that you automate your test cases against. Being an authoring tool, it has all of the best practices and principles built in, such that you can use data driven testing and consume your data from any kind of CSV file.

There is a very extensive reporting element built into it. It promotes and encourages and, sometimes, enforces usage of the right kind of design patterns from the tool itself. And it makes development of BDD extremely easy by providing a very user interface driven method to drag and drop test steps onto a BDD scenario. I’ll show you some of the screenshots next. So this is what the solution components look like together. What’s in yellow here is the QAS platform, which is a test authoring layer that allows you to create BDD or Java driven test cases.

And then, there is a fundamental layer underneath, which is a repository of your objects, a repository of your test steps, and of other object libraries that you create to make your automation reusable. And underneath that, there is a set of drivers, and we’ll talk about the Perfecto Selenium Driver today, which allows you to actually execute these tests at runtime on Perfecto cloud devices via an HTTP REST protocol that is exercised every time a test case is executed. So here are some of the screenshots. What this screenshot shows is the basic structure of QAS as a tool.

What you see to the left is the structure and format that a QAS project takes, which has clearly delineated areas for storing your resources, for storing your scenarios, for storing your source code, and so on. QAS is an Eclipse tool, and the Perfecto Selenium driver has an Eclipse plug-in, so the two fit nicely together. What you see to the right here is the Perfecto plug-in in the mobile cloud perspective of Eclipse, where we can open a device. We can inspect objects, along with all the other mechanisms through which we can create the object repository.

We can get to a particular screen on the app and point and click at a specific object. And it helps us pick up the object. And it suggests what the object locator should be, with respect to which we can actually execute the test case. And it enables you to build the object repository that you see in the middle here, which is a very structured representation of your objects, giving each a particular key that gets used in the code, the locator, and a user defined description. By the way, all of these things are editable, so you can edit them the way you want. And the description is what shows up in the report.
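As a rough sketch of that key, locator, and description structure, here is what an equivalent repository might look like if hand coded in Java; QAS keeps this in its own repository format, so the class and keys below are purely illustrative:

```java
import java.util.Map;

import org.openqa.selenium.By;

public final class ObjectRepository {

    // key -> (locator, human readable description shown in reports)
    public record UiObject(By locator, String description) {}

    private static final Map<String, UiObject> OBJECTS = Map.of(
        "login.username", new UiObject(By.id("username"), "Username field on the login screen"),
        "login.submit", new UiObject(By.id("btnLogin"), "Login button on the login screen")
    );

    public static UiObject get(String key) {
        UiObject obj = OBJECTS.get(key);
        if (obj == null) {
            throw new IllegalArgumentException("Unknown object key: " + key);
        }
        return obj;
    }
}
```

Centralizing locators behind keys this way means a UI change is fixed once in the repository rather than in every test that touches that screen.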

So this is how you pretty much build the object repository that gets used in the code. The next screen shows you the BDD perspective, or the QAS perspective, as we call it. And this is a user interface driven method of creating your BDD. It helps you create BDDs in two main ways. One is you can type the scenario steps, and it has look ahead: it completes your typing by picking up the existing steps that are already defined in your framework. Or you can drag and drop these steps from what you see as your step repository onto your scenario.

And, therefore, you can build the BDD scenarios in a very interactive manner. Your scenarios can be completely data driven. So you can define the data in an XML way or in a CSV way, again, in a very interactive manner. And you can also define several user defined attributes or meta tags against these scenarios so that you can search and filter the scenarios, again, in a very user defined manner. So for example, if you wanted to create a subset of your test suite, you could do it entirely on the basis of some of these attributes. For example, run the P1 test cases which meet the smoke criteria.
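In plain Cucumber-JVM terms, selecting such a subset by meta tags might look like the JUnit runner below; the tag names are illustrative, and QAS layers its own attribute mechanism on top of this idea:

```java
import org.junit.runner.RunWith;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Runs only the scenarios tagged both @P1 and @smoke.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features",
    tags = "@P1 and @smoke"
)
public class P1SmokeSuite {
}
```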

The third screenshot I have is that of a report that comes out of QAS. QAS automatically captures your screens from the actual device, which exists in the Perfecto cloud. And we automatically show all the assertions that you’re making. Basically, we show all of the steps that comprise the scenario. And underneath the steps, in the step code, if there are any assertions being made, the report automatically picks up on them and shows you which assertion is passing and which assertion is failing. It also captures screenshots for failed assertions, or even for passed assertions if you so configure it.

It does several other things too: it shows you the trends of passed and failed test cases, and it allows you to examine the environment at length, etc. Like I said, I will be giving a very detailed demonstration of this solution in a follow up webinar, so stay tuned for that. With that, I want to close the slide deck part of the webinar. And we are open for questions. Carlo, myself, and Mike will be taking all of your questions.

Leila Modarres: Thank you very much, Manish, and thank you, everyone, for your participation. We hope that you found the presentation useful. If you have any questions, our contact information is provided on the screen. I know a number of you folks may have posted some questions. We tried to get back to everybody, but if we haven’t gotten back to you, we’ll try to reply offline. And also, last but not least, if you missed any part of this webinar, there will be a recording available on the Info Stretch website and across all of our social media sites. So tune in very soon. Thank you, everyone, and have a great day.

Insights from Forrester Research and Perfecto

Leila Modarres: Thank you for joining us. Today’s topic is continuous quality. We’ll be discussing what this means when it comes to the mobile application development lifecycle and how your organization can effectively maintain continuous quality to stay agile and stay ahead. We are very excited to bring you our featured speakers today. This includes Info Stretch’s own chief technology officer, Manish Mathuria. Manish will be joined by Carlo Cadet, product lead at Perfecto Mobile, the leader in mobile application quality. And we’ll also be getting perspectives from Michael Facemire who is the principal analyst at Forrester Research and the leading expert on mobile software development.

This is a two part presentation series. So Part 2 of this webinar will be a firsthand demo of Info Stretch’s own solution hosted by Manish Mathuria. Just a quick note before introducing our speakers, you will have the ability to ask questions by using your question pane. So simply type in your question, and click send. If we don’t respond to your question during the webinar, we’ll make every effort to get back to you individually after the session. So with that, I would like to turn it over to Michael Facemire.

Michael Facemire: Thanks, and thanks, everybody, for joining today. As we talked about, or as was mentioned here, we’re talking about continuous quality and how to drive continuous quality in mobile. Given my background, here at Forrester, I’m the principal analyst that covers web and mobile development. But I’ve also been a mobile developer for a long part of my life going all the way back to the pocket PC, Windows CE, Palm Pilot days. And we’ve seen a big transition happen from those early days of mobile into the current state of mobile. But one of the biggest things that’s happened is the demand for mobile everything.

And I’m sure those of you on the call are hearing this that your customers want a mobile version of what you have. Your internal business wants mobile versions of their current business prophecies and business tools. And they’re also asking for it at a rate that has never been asked for before. In the past, they may have asked for something to be delivered tomorrow. But there was always a little chuckle to it. Now, when they say we want a mobile version of your ERP tooling, they don’t laugh when they say they want it tomorrow because, if you don’t deliver it tomorrow, they’ll find another option.

And that’s one of the biggest drivers of change that we’re seeing in mobile. And so not only do we have to deliver it immediately, but we also have to continue to update it and continue to keep pace with the changing demands for it. And then, on top of all of that, kind of an underlying current is it still has to have that enterprise level of quality that we’re all used to. And so if we think about continuous quality, the word continuous doesn’t mean it starts at some point and continues past that. Continuous means from literally Step 1. And so to do that, in our research, what we found for folks that have been able to deliver continuous quality is that they start from Day 1.

And they, literally, take the concept of quality and move it to the left. And so what does move to the left mean? When we kind of display a project in a GANT chart or in some of the project tooling, it always goes in a timeline from left to right. So when we say move quality to the left that means quality has to be a part of what you do from Day 1. So a developer coming up to me and saying, well, it works on my machine, or it was fast enough in my environment, my environment that was on a perfect Wi-Fi network that was unencumbered by other users with a pristine device without any other apps installed on it, that’s just not good enough these days.

We have to move quality and the monitoring of quality to the left. Similarly, performance is part of a raw quality. So understanding what our KPI’s, our key performance indicators, our key business indicators, knowing what they are, being able to quantify those, and moving those to the left, that’s equally important. So many times, I go into development shops in my job here at Forrester, and I see that they’re in Sprint 3 or Sprint 4 or Iteration 4, however you term it. And I ask what does the performance look like right now. And they say we don’t start checking for performance until we’re ready to ship because not everything is in place yet.

So the numbers will vary dramatically. And the reality is if you’re not checking for that all of the time, I guarantee you you’ll make decisions that don’t take performance into consideration. And, therefore, those end up being very, very hard to fix later on down the line. And then, finally, this is one that’s near and dear to my heart as well, in addition to quality, in addition to performance, a normalized way of accessing data through standard, consistent, consumable API’s. Move that to the left as well, and make sure that you have a consistent way of accessing data all of the time. So with that in mind, that’s changing how we write software.

So I had mentioned earlier the timeframe in which we have to build software is changing significantly. So when I first started professionally writing software back in 1997 timeframe, we had a very, very waterfall driven process. The overall process took anywhere between 12 and 18 months. Everyone in every discipline knew when they were kind of in the spotlight, when the design team was doing their thing, and when the development team was doing their thing, and when the QA team was doing their thing. And when it wasn’t your time, you kind of just sat back and had an easier life.

Well, now, that’s changed considerably because now, what we see for successful mobile is it tends to take anywhere between three weeks and two months, maybe upwards of three months for a given project from time of inception to time of delivery. And so therefore, that has to change the SDLC considerably. Early on, a couple of years ago, when a lot of folks were just getting into the modern wave of mobility, what that meant was they simply would cut off quality at the end. They said our standard waterfall process, we could get design and development done in those first three months.

And then, we would spend the next three to six months on QA. And well, so they would just whack that last step. And so they would shift to the market with no quality at all. Obviously, that’s a terrible idea. And we need to move quality to left. And that’s changing our SDLC. And one of the biggest things it’s changing is this. We’re moving away from a model in which some old guys sitting in a conference room with white hair that look like these two fellows that influenced my childhood significantly, they would sit and come up with the requirements on their own in a vacuum because we knew what users wanted.

And we knew what we needed to deliver to them. Now, the modern version of this is a very feedback driven lifecycle. So in the past, a project was done when we met a certain number of requirements. These folks on the previous slide would come up with 70 requirements, and we would only ship when 68 of those requirements were done at a certain level of quality. And that quality was defined by test cases that were defined at the same time. Defined before any development was done. So 700 test cases that were very structured and very regimented about what would define quality of this project.

Well, with mobile, that’s really, really difficult to do because you don’t know how folks are going to use your app until it’s actually in their hand in the wild. For example, we saw one app where a company wanted to provide coupons to folks when they were in the mall. And what they found was they would be on display on the coupon screen, but nobody ever accepted it. And only once they actually saw people using it in the mall, they realized folks would be pushing a cart. Folks would be pushing their stroller. And the accept button was in the far upper left of the screen, and so most right handed people couldn’t get their thumb up there.

So they would see the coupon and not be able to click it simply because it was a little bit too hard. And so they would just hit the home button and make it go away because it wasn’t usable. And so only until you actually see it in the wild do you know what the real requirements are. And it might be just a subtle change of where a button is put. But it’s getting the feedback from your users is critical. And so this is what we’re seeing the feedback driven lifecycle looks like. It’s defining objectives. Establishing and being able to quantify the performance indicators, the quality indicators. Create a base minimum viable product.

Quantify that feedback from your users using feedback tools, watching users use it in real life. Align that feedback with the initially defined KPI’s. And then, rinse and repeat. And this is how we’re seeing folks really respond to the changes in the mobile SDLC. So with that in mind, Manish, question for you. As folks are making this transition into a feedback driven lifecycle, first, are you seeing this move made? Or are you seeing folks try to do this in a different way? Are they trying to address the time and market challenge and the changes to the SDLC in a different way?

And if they are moving to a feedback driven lifecycle, how is that affecting them? And how are you at Info Stretch able to kind of take advantage of this?

Manish Mathuria: So Mike, that’s a great question. And most certainly, we are seeing people realizing the need to shift left and also to introduce the feedback. In fact, the market is pushing them. The example you gave is a very pertinent one where the market or the customer is asking them to incorporate feedback in the software driven lifecycle.

In my presentation, I will certainly be talking about some of the techniques that the development team or the entire team can incorporate in their process, the software development process that helps them move things left and incorporate more feedback and keep the alignment between different parties, QA’s, developers, etc., continuously on the same target. So the answer to your question is certainly yes. It’s happening in SCRUM teams. It’s happening from the customer side. But feedback is extremely important.

Michael Facemire: Yeah. And are there key challenges that folks are running into that you’re seeing that you’re able to help with?

Manish Mathuria: Yes. And the direct manifestation of this is on the software development lifecycle. Teams are not used to getting this feedback in real time. And forget market feedback. A lot of it is actually feedback within themselves. How do you keep the testers, developers, product owners, business analysts, etc., on the same page with respect to continuously changing requirements? And the agility that the market demands is fairly critical. And like I said, I’ll be talking about some techniques that we have developed, helped develop with our customers that actually have produced very good results.

Michael Facemire: Gotcha. Good deal. Carlo, from the Perfecto side, can you shed some light on shifting quality to the left and what that means from your perspective and some of the interesting challenges that you’ve seen from the mobile world as folks try to move quality to the left in their day to day SDLC?

Carlo Cadet: Sure. Let me, perhaps, comment on two aspects, Mike. So 1) we’re seeing more and more organizations move to what others call a Dev Test Construct where really A) developers are taking on more responsibility to do expanded testing, and B) that QA teams are actually shifting their processes to really align with a Dev organization. For example, instead of using a commercial tool with a scripting language, they’re shifting now to writing their own test cases in Java alongside developers.

And that’s really making the QA role a far more technical role and more developer-like. And this drives alignment from the beginning between coding and testing and creating a synchronized process. And then, perhaps the second thing in terms of a feedback driven lifecycle, as you said, so organizations are moving to deliver software faster.

And we’re seeing a rising number of people that embrace continuous integration as a fundamental strategy where they’re automating the bill process in that as soon as new code is committed, it triggers a build and test process essentially trying to shrink the time of unknown quality between a change and a confirmed verification that it actually has achieved the outcome. And so these are two areas that we’ve seen happening in the marketplace. 1) A move towards the Dev Test model. And 2) the embrace of continuous integration.

Michael Facemire: Yeah. That’s a great point. That move to the Dev Test model is one that’s near and dear to my heart. As a developer myself, I’ll tell you that nothing real slows down a developer like telling him stop what you’re doing and context switch back to what you did a week ago when you checked in some code because we just now realized that that code you checked in a week ago slowed things down or broke something. So that continuous Dev Test cycle is incredibly important because nothing destroys developer productivity worse than context switching there. Now, curious of your thoughts – your relationship with Info Stretch, how is that benefiting folks with regards to this feedback driven lifecycle.

Carlo Cadet: Absolutely. I think that’s a great question, Mike. Our Info Stretch relationship is multifaceted. And we’re going to learn a little bit more in the webinar in terms of from the technology aspect. There are really two parts that I’ll stress. 1) Is we’ve been using the phrase continuous quality that really nails our perspective and provides the foundation for the integration that we have as a technology partner with Info Stretch where we are providing our continuous quality lab, which is comprised of real devices hosted in a cloud configuration for quality purposes that Info Stretch, with our API, is not taking advantage of to support their test authoring solution.

And, in particular, what I think to this point of shifting left, their support of behavior driven development, which really starts the quality process at inception. So it’s really an exciting technology relationship between the two organizations.

Michael Facemire: Good deal. So at this point, I’ll pass it over to you to expound upon that a bit.

Carlo Cadet: Sure. Thanks for that. So one of the areas here that I want to transition a little bit, Mike, is as we talk about shifting to the left, a fundamental part of the conversation is really about accelerating velocity. We’re shifting to the left and starting the quality process earlier with the fundamental goal of delivering product to the market faster. And this has to do with that feedback driven lifecycle. As opposed to development cycles that were taking 12 to 18 months, we’re seeing more and more organizations shift from annual releases to quarterly releases and, increasingly, monthly releases.

But as they make that transition, Mike, they’re running into some challenges. And I’ve encapsulated on this slide what I’m calling velocity blockers as it relates to delivering better software faster. The first is the realization or the recognition that manual testing, while critical and it plays an important role, is non scalable. And what it really means is that the more manual testing that I do, it’s going to slow me down. And, therefore, the inverse is true, which is the more that I automate, the greater the ability I have to accelerate my process and to take advantage of key techniques such as continuous integration.

A second blocker that we find for many organizations has, again, to do with time, or more specifically, the tradeoff between available time and coverage. And coverage here, Mike I define as both test case coverage as well as device coverage where I might be exercising the full test suite but only doing it on two devices. Although, I know, in truth, that my user population is to cover 50 percent of my users, I’ll need perhaps 30 some odd devices to test. But I’m consciously making a risk based decision to only test on two or perhaps even four devices. And so coverage is really a velocity blocker in the sense that it’s challenging that time and forcing it into a risk based approach.

The third area that we find challenging and uniquely related to mobile is the test lab itself. And really is the test lab in a test ready, always on capability? As we accelerate the process and shift testing left, that means we’re simply doing a rising number of test iterations themselves, which places a greater need for lab availability. Some of the reports that are out there indicate that lab availability is one of the common delay factors within the quality process. And then, the fourth area has to do with multiple teams. What we’re seeing in the customers that we work with, which are frequently large enterprises that development is not one group of eight people in one location.

Rather, it’s done in a highly distributed fashion where they might have several dev centers and, perhaps, a quality COE located offshore. But the key aspect there is to ensure that whatever potential quality impediments are found, whatever defects are found, they need to be efficiently communicated in a way that supports reproducibility and, ultimately, resolution. And so collaboration and providing the right artifacts or, for some in the audience who might know for their developers who label certain defects as simply non reproducible, yes, you might have found something, but, no, I can’t reproduce it.

And, therefore, I’m going to move on. That area is a key velocity blocker because it sustains a growing quality debt through the process. And then, the last element that slows velocity, in particular, is slow feedback. As you mentioned at the outset with shift left, and I think shifting testing or shifting quality left is really a critical idea, Mike, and I’m glad you started there. It has to do with also bringing quality into the main process, into the main development cycle. So really, anything that is done out of cycle, out of that primary development cycle, by definition, delivers feedback slower than had it been included.

And so a good example is, perhaps, after a full test suite is accomplished through sanity testing and a regression suite and even, perhaps, compatibility that that’s when performance testing starts. Well, that’s really late in the cycle. And really, it potentially creates an opportunity to challenge a go/no go decision if there’s unknown quality and questions as it relates to performance because many organizations recognize that users – a very common complaint is simply this app is not performing as fast as I expected it to. So collectively, these areas represent velocity blockers. And so what I want to move, if I can get my computer to participate, is I want to talk about how to unleash velocity.

And really, it starts with automating the process and automating the testing. Many of the organizations we work with are very familiar with the automation test pyramid, an idea that was introduced a number of years ago, but are really still in the place where they’re trying to put all of the pieces together. And what that really means is being able to start automating the testing program at inception. That’s really when the build occurs: being able to find out, for mobile apps in particular, whether that app will run on the device, and whether a basic set of smoke tests can be executed as a sanity check for the build.

And then, to discretely exercise both back end testing and UI testing. Part of the challenge we find with many organizations is that, while they pursue automation, they struggle to develop automation that works. So for unleashing velocity, it’s really critical to have a code strategy for your testing that is comparable to the code strategy for the code you deliver to production. And that means being able to exercise common techniques such as class libraries or component libraries, being able to reject the device coverage tradeoff by executing in parallel, and being able to control not only the application under test but the device under test.
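To make the component library idea concrete, here is a minimal sketch of a reusable screen object in Java; the class name, locators, and method are illustrative assumptions, not taken from any specific product.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // A reusable component: every test that touches the login screen drives it
    // through this one class, so a UI change is absorbed in one place.
    public class LoginScreen {
        private final WebDriver driver;

        public LoginScreen(WebDriver driver) {
            this.driver = driver;
        }

        public void logIn(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);      // illustrative locators
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("loginButton")).click();
        }
    }

Tests built from such shared components can also be fanned out across many devices in parallel, which is how the device coverage tradeoff gets rejected in practice.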

When we put all of these pieces together, that’s when we see that organizations are able to successfully move from perhaps a 10 percent level of automation to 70 or 80 percent, and really put it together with real devices in a test ready configuration. That includes being able to leverage real devices that are in the markets where our end users are. And by putting these pieces together and now thinking about shifting left, Mike, where it comes to the table is: how do I do it faster? And how do I do it earlier? And that really starts at the commit level, when I’m committing the code.

And many are beginning to adopt continuous integration and then recognizing the need to take that out of cycle testing I talked about earlier and bring it into the cycle: embedding, for example, basic performance data within every test case that’s executed, embedding timers, or simply varying the test conditions, the networking conditions, to mimic real user behaviors.
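As a rough sketch of what embedding a timer in a test case can look like, here is a minimal JUnit example; the two second budget and the method names are assumptions for illustration only.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class LoginPerformanceTest {

        @Test
        public void loginStaysWithinBudget() {
            long start = System.nanoTime();
            performLogin(); // stand-in for the real UI step under test
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // Every functional run now also yields a basic performance data point.
            System.out.println("login took " + elapsedMs + " ms");
            assertTrue("login exceeded its 2000 ms budget", elapsedMs < 2000);
        }

        private void performLogin() {
            // In a real suite this would drive the app through the UI driver,
            // possibly under varied (for example, throttled) network conditions.
        }
    }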

When we put all of these pieces together, this is when organizations have successfully constructed the recipe to unleash velocity: automating the process with continuous integration, automating the testing, which really means both functional and nonfunctional testing, and building that on the foundation of an always ready lab. And then, lastly, this is just a short overview, from the Perfecto Mobile perspective, of the components of our solution that Info Stretch builds upon to deliver their test automation offering, as well as executing BDD.

And it’s when these combine that organizations are able to accelerate their velocity and deliver continuous quality at enterprise scale. And with that, those are the ideas I wanted to share as we transition to Manish, who is going to talk a little further about the integration they’re bringing to the market, leveraging our continuous quality lab.

Manish Mathuria: Thank you very much, Carlo. And thanks, Mike, for a great introduction to the topic we are talking about today. So let me start from where Carlo left off. Let me first share my screen. All right. So Carlo, I want to pick up on your point about seeing people build and release on a weekly or a monthly basis. Actually, not so much in the mobile world, but in the SaaS world, we are seeing people release on a daily basis. So the emphasis that is put on automation, or the ask that is put on us, typically, is: I want to release multiple times a day.

And, therefore, you had better get the entire test suite to run in an hour or so, so that I can release to production. That’s where we are heading. Of course, with mobile apps it is not quite that easy, because app stores, etc., are involved in the process. So what that puts pressure on is that we have to start thinking about security, about performance, about automation from Day 1, before we write the first line of code. And that’s like an extreme example of shifting left. So anyway, what I’ll be talking about more is what these challenges bring. I’ll make things a little bit more practical.

I’ll take these concepts home and talk about the challenges that agility and this particular process bring, some approaches to incorporating the feedback cycle into your day to day software development lifecycle, and what concepts help. I’ll also be showing you some screenshots of a technical solution for automation, a joint solution between Perfecto and Info Stretch, to give you a sneak preview of what it looks like. And, like Leila said, there will be a follow up webinar in which we will do a detailed demo.

So let’s jump straight into it. As we all know, when we start working with agility, there are different parties involved. There’s a product owner. There’s a tester. There’s a developer. And there’s an automation team, and of course, there are multiple people who participate in each. And the challenge that comes from this is that, working as one team, the product owner produces a story, then testers write test cases and developers write code, and the automation team writes automation against it.

As time progresses, and as requirements morph through the release cycles, it is a difficult catch up game, because in a sprint you are required to write all of the new functional test cases, keep your regression automation up to date, and manage all of the code related changes, and to do it every sprint cycle, which could be a one or two week cycle. So pretty soon, what we start to observe is that your code and your automation code start to diverge.

And because your automation is written in a language which is not directly related to your test cases, which are written primarily in English or some other natural language, you often cannot tell how related the automation code is to the test cases. So they start to diverge, right? Behavior driven development is one approach that is a very strong solution to this particular problem. And what it states is something very simple. What we mean by behavior driven development, or test driven development, or specification by example, as there are several terms for it, is that you will keep the user stories, acceptance criteria, and test scenarios closely knit together.

So you will write user stories. You will write acceptance criteria. You will convert these acceptance criteria into automatable scenarios. And it’s precisely these scenarios, which are written in English, that get automated. So as a result, what you have is a continuously living documentation system that is designed not to produce any divergence. Hence, the feedback loop that keeps the automation code completely in line with the test specs, and the test specs completely in line with the user stories, is there by design.

So let’s look at what this looks like in real life. So this is a user story. And I won’t go into the details of the user story. You can read it. But the bottom line is, all it says is that if a room gets returned or cancelled, it should go back into the inventory. Now, typically, this user story would be converted to test cases, which are written in Excel. And right there, you are introducing a cause for divergence. From that point on, that Excel test case will probably be converted into Java code or QTP code or something like that. Again, that’s yet another point where you are introducing divergence.

So as the user story changes, the test specs change, and the code changes. Pretty soon, you don’t really know what you’re testing or what you’re automating. And when you certify a build through continuous integration, you don’t know exactly what you’re testing. So what BDD says is that you write a scenario that looks very much like what you have on the screen right now. And you keep the scenario very close to the user story, perhaps in the same documentation system that you use for your agile management. And you keep these test scenarios pretty close.

And furthermore, these test scenarios are exactly what gets automated. So here, on the screen, I have an example where each statement of the scenario has a driver to implement that statement. And that scenario is getting automated precisely by virtue of writing drivers for each of the statements. Now, this brings yet another benefit to automation, which is not easily observable here. But as Carlo mentioned, just as in software development, reuse, componentization, etc., are the right principles for coding automation. So in other words, you don’t want to create automation that, for each test case, is completely disjoint from every other.

You want to write several reusable components that get used over and over again. Therefore, when the code changes, you are changing it in one place and not in 1,000 places. If you think about it, what BDD also does is automatically introduce reuse, because once you automate a particular step in a scenario, by definition, it is reused. So when I create a library of test steps, that library of 100 test steps can be reused across thousands of BDD scenarios. Therefore, if my code changes or my requirements change, I have to change a minimal number of test steps.
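To illustrate both points, here is a minimal Cucumber-JVM style sketch: the plain English scenario is shown in the comment, and each statement is bound to one small Java driver method. The wording, class names, and step logic are invented for the example, not taken from the actual hotel application.

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertEquals;

    // Scenario (kept in a plain English .feature file):
    //   Given the inventory has 10 rooms available
    //   When a booked room is cancelled
    //   Then the inventory should show 11 rooms available
    public class RoomInventorySteps {
        private int available;

        @Given("the inventory has {int} rooms available")
        public void inventoryHasRooms(int count) {
            available = count;
        }

        @When("a booked room is cancelled")
        public void roomIsCancelled() {
            available++; // a cancelled room goes back into inventory
        }

        @Then("the inventory should show {int} rooms available")
        public void inventoryShouldShow(int expected) {
            assertEquals(expected, available);
        }
    }

The same three step methods can back any number of scenarios that reuse the same phrasing, which is exactly the step library effect just described.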

So let’s look at the recipe for continuous quality. I will try to summarize a few of the concepts we discussed today and then jump straight into showing you some screenshots of QAS and the solution that we have on the table here. First, the one thing we talked about is continuous feedback. Continuous feedback comes from two angles. One is from the customers. There are techniques a team can deploy to automate feedback from the customers by introducing several technologies into the app, which is outside the scope of our current discussion.

However, internally, within your SCRUM team, living documentation and specification by example, which are synonyms for BDD, by definition introduce continuous feedback into your software development lifecycle. Second, continuous engagement means that the SCRUM team is always on the same page. It is not that the product owner says one thing, the developer understands something else, the tester writes specs for something else again, and the automation team is trying to play a catch up game with all of these things. What it means is that the system allows all of these parties to speak one language.

And when they say that a particular user story is being tested in a certain way, it is always true. So creating a system of engagement and a process of engagement that actually allows all of these parties to talk together is a strong tenet of continuous quality. The next thing is continuous integration. Carlo mentioned and underlined the importance of continuous integration. But we say it this way: if you are not automating the process of automation, your automation is useless. It is mindless to create automated test cases if you are going to execute them by hand.

The requirement that the business puts on you to release your code multiple times a day, or maybe multiple times a week or multiple times a month, is not going to be met if you are going to execute your automation by hand. So continuous integration is as important as, or perhaps even more important than, actually automating your test cases in the first place. Your automated test cases, as well as your build, should be integrated into one tight system where, whenever a build happens, and that could happen multiple times a day, your test cases run right alongside.
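As a sketch of what that wiring can look like, here is a Cucumber-JVM JUnit runner of the kind a CI server can execute on every build, typically via the build tool's test phase; the paths and package names are assumptions for the example.

    import org.junit.runner.RunWith;
    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;

    // A CI job (Jenkins, for example) runs this class with every build,
    // so the BDD suite executes right alongside the compile step.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features", // plain English .feature files
            glue = "com.example.steps"                 // package holding the step drivers
    )
    public class BuildVerificationSuite {
    }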

And finally, there is a need for a continuous environment. So you can catch the drift here: to deliver continuous quality, these several continuous elements all have to fall into place. And a continuous environment means always available, on demand real devices that you can tap whenever required, whenever continuous integration runs, such that these test cases can actually be executed as you need them. So let’s look at the solution elements. There are two critical components that we’re going to be talking about. QAS is QMetry Automation Studio.

And the Perfecto Mobile cloud Selenium driver is the critical component that integrates the two. That’s the glue between the mobile automation and your BDD test cases. So let’s look at how. QMetry Automation Studio is, basically, an Eclipse based tool that is essentially an authoring platform for creating BDD based test cases that can be automated using multiple drivers. They could be Perfecto drivers, or any other drivers that you automate your test cases against. Being an authoring tool, it has all of the best practices and principles built in, so you can use data driven testing and consume your data from any kind of CSV file.

It has a very extensive reporting element built into it. It promotes, encourages, and sometimes enforces the use of the right kind of design patterns from within the tool itself. And it makes development of BDD extremely easy through a very user interface driven method of dragging and dropping test steps onto a BDD scenario. I’ll show you some of the screenshots next. So this is what the solution components look like together. What’s in yellow here is the QAS platform, which is a test authoring layer that allows you to create BDD or Java driven test cases.

And then, there is a fundamental layer underneath, which is a repository of your objects, your test steps, and the other object libraries that you create to make your automation reusable. And underneath that, there is a set of drivers. We’ll talk about the Perfecto Selenium driver today, which allows you to actually execute these tests at run time on Perfecto cloud devices via an HTTP based REST protocol that is exercised every time a test case runs.
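To give a flavor of what that looks like from the test code's side, here is a hedged sketch of opening a Selenium RemoteWebDriver session against a cloud hosted device; the URL and capability names are placeholders, so consult the Perfecto documentation for the exact endpoint and capability set it expects.

    import java.net.URL;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class CloudDeviceSession {
        public static void main(String[] args) throws Exception {
            // Placeholder capabilities; the real names and values come from
            // your cloud provider's documentation.
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("user", "you@example.com");
            caps.setCapability("password", "********");
            caps.setCapability("deviceName", "1234567890ABCDEF");

            // Every command travels over HTTP to the remote hub, which drives
            // a real device in the cloud.
            RemoteWebDriver driver = new RemoteWebDriver(
                    new URL("https://mycloud.example.com/wd/hub"), caps);
            try {
                System.out.println("session id: " + driver.getSessionId());
            } finally {
                driver.quit();
            }
        }
    }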

So here are some of the screenshots. This first one shows the basic structure of QAS as a tool. What you see to the left is the structure and format that a QAS project follows, with clearly delineated areas for storing your resources, your scenarios, your source code, and so on. QAS is an Eclipse tool, and the Perfecto Selenium driver has an Eclipse plug-in, so the two fit together nicely. What you see to the right is the Perfecto plug-in in the mobile cloud perspective of Eclipse, where we can open a device and inspect objects through the mechanisms it provides for creating the object repository.

We can get to a particular screen in the app and point and click on a specific object. The plug-in helps us pick up the object and suggests what the object locator should be, with which we can actually execute the test case. And it enables you to build the object repository that you see in the middle here, which is a very structured representation of your objects: each has a key that gets used in the code, a locator, and a user defined description. By the way, all of these are editable, so you can adjust them the way you want. And the description is what shows up in the report.
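As a rough sketch of how such a keyed repository can be consumed from test code (the file format and keys here are invented for illustration; QAS has its own storage format):

    import java.io.FileReader;
    import java.util.Properties;
    import org.openqa.selenium.By;

    // locators.properties (illustrative):
    //   login.username=id=username
    //   login.submit=xpath=//button[@type='submit']
    public class ObjectRepository {
        private final Properties locators = new Properties();

        public ObjectRepository(String path) throws Exception {
            try (FileReader reader = new FileReader(path)) {
                locators.load(reader);
            }
        }

        // Look up a locator by its key; tests never hard-code locators,
        // so a UI change is absorbed by editing the repository file.
        public By get(String key) {
            String value = locators.getProperty(key);
            if (value == null) {
                throw new IllegalArgumentException("no locator for key: " + key);
            }
            if (value.startsWith("id=")) {
                return By.id(value.substring("id=".length()));
            }
            return By.xpath(value.substring("xpath=".length()));
        }
    }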

So, coming back to the screenshot, this is pretty much how you build the object repository that gets used in the code. The next screen shows the BDD perspective, or the QAS perspective, as we call it. And this is a user interface driven method of creating your BDD. It helps you create BDDs in two main ways. One is that you can type the scenario steps, and it has look ahead: it completes your typing by picking up the existing steps already defined in your framework. Or you can drag and drop steps from what you see as your step repository onto your scenario.

And, therefore, you can build the BDD scenarios through a very interactive mechanism. Your scenarios can be completely data driven: you can define the data in XML or CSV form, again in a very interactive manner. And you can also define several user defined attributes or meta tags on these scenarios, so that you can search and filter the scenarios in a user defined way. So for example, if you wanted to create a subset of your test suite, you could do it entirely on the basis of these attributes: say, run the P1 test cases that meet the smoke criteria.
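As a sketch of that kind of attribute based selection, here is how tag filtering looks in a Cucumber-JVM runner, assuming the scenarios carry tags in their feature files; the tag names are illustrative.

    import org.junit.runner.RunWith;
    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;

    // Scenarios are annotated in the .feature file, for example:
    //   @P1 @smoke
    //   Scenario: A cancelled room returns to inventory
    // This runner then executes only the P1 cases that also meet the smoke criteria.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features",
            glue = "com.example.steps",
            tags = "@P1 and @smoke" // boolean tag expression selects the subset
    )
    public class P1SmokeSubset {
    }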

The third screenshot I have is of a report that comes out of QAS. QAS automatically captures screens from the device that is actually executing, and that device lives in the Perfecto cloud. And we automatically show all the assertions that you’re making. Basically, we show all of the steps that comprise the scenario. And underneath the steps, in the step code, if there are any assertions being made, the report automatically picks up on them and shows you which assertions are passing and which are failing. It also captures screenshots for failed assertions, or even for passed assertions if you configure it that way.

It does several other things too: it shows you trends of passed and failed test cases, and it lets you examine the environment in detail, and so on. Like I said, I will be giving a very detailed demonstration of this solution in a follow up webinar, so stay tuned for that. With that, I want to close the slide deck part of the webinar. And we are open for questions. Carlo, Mike, and I will be taking all of your questions.

Leila Modarres: Thank you very much, Manish, and thank you, everyone, for your participation. We hope that you found the presentation useful. If you have any questions, our contact information is provided on the screen. I know a number of you may have posted questions. We tried to get back to everybody, but if we haven’t gotten back to you, we’ll reply offline. And also, last but not least, if you missed any part of this webinar, a recording will be available on the Info Stretch website and across all of our social media sites. So tune in very soon. Thank you, everyone, and have a great day.