How often should QA be doing browser compatibility or device testing?

I believe the answer depends on the context of your application and SDLC. I've had a project where we did a browser compatibility check for each release, but that was because they only released twice a year. Is it important to check browser compatibility because you release often and a lot of cross-browser bugs are showing up? Then hell yeah. I can imagine that if you are releasing weekly or even faster, you don't always want to do a full sweep. You should analyse your release process, features and tickets and see whether the ROI is worth it: how long does a full browser compatibility or device test actually take?

Very good question. In the desktop context: Firefox is really tiny these days, though you need instrumentation and metrics to back up a statement like that. We currently support Firefox, Edge and Chrome, but we really only test on Chrome; during regression testing we tend to pick a different browser once in a while. If a regular ticket includes changes to the UI, you may also want to do a cross-browser/device test, and if you are building a bigger feature it can be worth doing a full sweep.

Where I am, for example, Safari is big news, but only because a larger proportion of our customers use Macs (still less than 10%, but unfortunately very vocal), although in reality a lot of them end up forced to use Chrome on the Mac anyway. Right now my CI/CD pipeline runs Chrome for 99% of the tests, but it can also run Edge and Firefox. My problem is that half of the tests only work on Chrome, due to WebDriver inconsistencies and the sheer amount of work needed to maintain portability, which I never foresaw being quite so much. I run any manual smoke tests in Opera and Vivaldi just to mix things up.

It is really going to depend on who your customer base is. If you have a Chinese market, they have their own browsers eating into the marketplace to look out for. My experience is that our mobile market balance does not look like the device-usage figures other people publish. I'm keen to see what happens when we add metrics into the browser end of the product; you need anonymised metrics for these decisions.

There are two major factors when deciding this. The first is driven by the product: how likely is it that a change breaks some compatibility, and how severe is it if it does? A word of warning, though: metrics gathering can land you in a load of pain, because the data is often unmanageable and not terribly useful to the business.
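The release-cadence and ROI reasoning in the thread can be sketched as a small decision rule. This is purely illustrative: the function name, the inputs, and every threshold (three cross-browser bugs per quarter, one release per month) are my assumptions, not a policy anyone in the discussion prescribed.

```python
# Illustrative sketch of the "when to do a full cross-browser sweep" reasoning.
# All names and thresholds here are assumptions for the example.

def sweep_scope(ui_changed, releases_per_month, xbrowser_bugs_last_quarter):
    """Pick a test scope for a release based on risk and release cadence."""
    # Frequent cross-browser regressions justify the cost regardless of cadence.
    if xbrowser_bugs_last_quarter >= 3:
        return "full-sweep"
    # UI changes on an infrequent release train are worth a full sweep too
    # (cf. the project that released twice a year and swept every release).
    if ui_changed and releases_per_month <= 1:
        return "full-sweep"
    # UI changes on a fast release train: spot-check one alternate browser.
    if ui_changed:
        return "spot-check"
    # No UI changes: stick to the primary (here Chrome-only) pipeline.
    return "chrome-only"

print(sweep_scope(True, 4, 0))   # roughly-weekly releases, UI changed -> spot-check
```

The point is not the specific thresholds but that the decision is mechanical once you know your cadence, the blast radius of the change, and how often cross-browser bugs actually bite you.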
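On the anonymised-metrics point: one minimal way to turn usage data into a support decision is to tally sessions per browser family and flag the families above a share threshold. The 5% cutoff and the sample numbers below are made up for illustration; as the thread notes, your real distribution may look nothing like published figures.

```python
# Sketch: deciding which browsers cross a support threshold from anonymised
# usage counts. The 5% cutoff and the sample data are illustrative assumptions.
from collections import Counter

def browsers_to_support(usage_counts, threshold=0.05):
    """Return browser families whose share of sessions meets `threshold`."""
    total = sum(usage_counts.values())
    if total == 0:
        return []
    return sorted(
        family for family, count in usage_counts.items()
        if count / total >= threshold
    )

# Example with made-up numbers (10,000 sessions total):
sample = Counter({"Chrome": 7200, "Edge": 1100, "Safari": 900,
                  "UC Browser": 500, "Firefox": 300})
print(browsers_to_support(sample))
# -> ['Chrome', 'Edge', 'Safari', 'UC Browser']
```

Note how a regional browser like UC Browser can clear the bar while Firefox misses it, which is exactly the kind of result that differs from market-wide statistics and argues for measuring your own customer base.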