If all web browsers follow IT industry standards, what is the need for compatibility testing?



  • Simple question: WHY do we need compatibility testing?

    The usual answer is that applications behave differently on different web browsers. That is exactly what I'm asking: why does an application behave differently on different web browsers when all web browsers follow industry standards?

    I investigated this and learned that every web browser has its own way of reading .css files. But that is a rather layman's answer.

    I would really appreciate it if someone could help me understand the exact reason WHY we require compatibility testing.



  • The short answer is that each browser implements the industry standards based on the implementation team's understanding of those standards.

    There are several different base engines that are used by different browsers, including but not limited to WebKit, Gecko, Trident, and Blink. That accounts for the majority of differences in behavior between different browsers in my experience.

    In addition, each browser implements the engine in a different way, depending on the operating systems the browser supports, the range of devices the browser supports, the focus of the programming team, and many other factors.

    Then, programmers are human, and make mistakes. Each browser is going to have different quirks and bugs because of this, so it makes sense to test in different browsers to ensure your application isn't caught by one of those bugs (for example: Internet Explorer has known issues with memory leaks involving AJAX and jQuery garbage collection, because its JavaScript implementation is less forgiving than Firefox's or Chrome's. Each successive version of Internet Explorer has reduced these issues, but has not eliminated them).
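
    The classic mitigation for those old-IE leaks (where circular references between DOM nodes and JavaScript handlers were never collected) was to detach handlers explicitly. Here's a minimal sketch of that pattern; `createHandlerRegistry` is our own hypothetical helper, not a jQuery or browser API:

    ```javascript
    // Sketch: keep a registry of attached handlers so they can all be
    // detached explicitly on teardown, breaking DOM<->JS reference cycles
    // that older IE garbage collectors could not reclaim.
    function createHandlerRegistry() {
      const entries = [];
      return {
        add(target, type, handler) {
          target.addEventListener(type, handler);
          entries.push({ target, type, handler });
        },
        disposeAll() {
          // Remove every handler we registered, newest first.
          while (entries.length) {
            const { target, type, handler } = entries.pop();
            target.removeEventListener(type, handler);
          }
        },
        size() {
          return entries.length;
        }
      };
    }
    ```

    In a browser you would call `disposeAll()` before discarding the DOM subtree (or on page unload), so no handler outlives the elements it was bound to.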

    Since most of the major rendering engines are written in C++, they have to be recompiled for different operating systems - which introduces another layer of potential differences, because each compiler optimizes and generates binaries in a slightly different way - and that's assuming the authors of the original code did not include any operating-system-specific code (which is a rather large assumption to make, in my experience). The end result is that the same version of the same browser can behave differently on different operating systems (Firefox on Windows as opposed to Firefox on Linux is a good example here).

    Effectively, the browser running your web application is at the top of a multi-layered technology stack, and that stack is different for each browser. Even when the browsers all implement the same standards (and until very recently this was not the case - Internet Explorer 10 was the first of the IE family to implement the same standards that Firefox, Chrome, and Safari do), they all have different levels of support for JavaScript, jQuery, HTML, CSS, and so forth.
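
    Because support levels differ, the usual defensive practice is to feature-detect at runtime rather than assume an API exists. A minimal sketch, where `supports` is our own hypothetical helper rather than a standard function:

    ```javascript
    // Sketch of feature detection: probe for an API before using it,
    // instead of assuming every browser supports it at the same level.
    function supports(obj, prop) {
      return obj !== null && typeof obj === "object" && prop in obj;
    }

    // In a browser you might write:
    //   if (supports(window, "fetch")) { /* use fetch */ }
    //   else { /* fall back to XMLHttpRequest */ }
    ```

    Libraries such as Modernizr exist precisely because these per-browser support gaps are too numerous to memorize.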

    One example that I know (because I've experienced it) is something as simple as when the OnChange() event for a dropdown fires: Firefox, Chrome, and Safari fire the event after the dropdown loses focus. Internet Explorer (all versions so far) fires the event each time the selected item changes. If your web application enables/disables or shows/hides fields based on the selection in a dropdown, exactly when those fields are enabled will vary depending on the browser - which in turn can cause other differences in behavior.
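
    One way to smooth over that timing difference is to route both events through a tracker that only reacts to a genuine value change, whichever event delivers it first. A sketch under that assumption; `makeChangeTracker` is a hypothetical helper, not a library API:

    ```javascript
    // Sketch: normalize dropdown change handling across browsers that
    // fire `change` at different times. The tracker remembers the last
    // value and invokes the real handler only when it actually changes,
    // so duplicate or early/late events are harmless.
    function makeChangeTracker(initialValue, onRealChange) {
      let last = initialValue;
      return function (newValue) {
        if (newValue !== last) {
          last = newValue;
          onRealChange(newValue); // fires once per distinct value
        }
      };
    }

    // In a browser (hypothetical wiring):
    //   const track = makeChangeTracker(select.value, toggleFields);
    //   select.addEventListener("change", () => track(select.value));
    //   select.addEventListener("blur",   () => track(select.value));
    ```

    Wiring the same tracker to both `change` and `blur` means the fields toggle at the earliest event a given browser delivers, without toggling twice.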

