Glass Canada


NAFS will not replace building engineering.

December 16, 2014  By Al Jaugelis, RDH Group

While the National Building Code is clear that testing and labelling
fenestration products to the North American Fenestration Standard
(NAFS-08) and the Canadian Supplement is mandatory for Part 9 buildings,
the application of these standards to commercial fenestration products,
in both Part 3 and Part 9 buildings, is less straightforward.

Applying the North American Fenestration Standard in the commercial world of custom fabrication and site-glazed products is going to be tricky. Labelling will probably not be an option in most situations.


The 2010 National Building Code and provincial codes based on it, such as the 2014 Ontario Building Code and the 2012 B.C. Building Code, all use very similar language with respect to the application of NAFS and the Canadian Supplement in both Part 9 and Part 5. The NBC is clear that our performance expectations for the wind and water resistance of fenestration products, called performance grades in the code, must be determined on a building-by-building basis, taking into account each building’s location, terrain, and the height of the fenestration product. For products within the scope of NAFS, it directs us to use the Canadian Supplement to determine the appropriate design pressure, water penetration resistance, and snow load (for skylights), though it is entirely appropriate to use it for fenestration products outside the scope of NAFS as well.


For products within the scope of the NAFS standard, these building-specific performance expectations are defined with respect to a property called “performance grade.” Performance grade ratings are denominated in IP (inch-pound unit) design pressure increments of five pounds per square foot, ranging from PG15 to PG100. Achieving a performance grade also requires successful testing to a range of specified tests in a prescribed order, starting with operating force (and force-to-latch for side-hinged door systems), then air leakage, water penetration resistance, uniform load tests at both design pressure and structural test pressure (1.5 times design pressure), and forced-entry resistance. A product must pass all of these tests to achieve a performance grade.
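The numeric relationships above (the PG number equals the design pressure in psf, grades run in 5 psf steps from PG15 to PG100, and structural test pressure is 1.5 times design pressure) can be sketched as a short illustration. The function name and the psf-to-pascal conversion below are mine, not part of the NAFS standard:

```python
# Illustrative sketch only: the arithmetic relationship between a NAFS
# performance grade (PG) rating, its design pressure, and the structural
# test pressure. Function name and conversion factor are assumptions.

PSF_TO_PA = 47.88  # 1 lbf/ft^2 is approximately 47.88 pascals


def structural_test_pressure_psf(pg: int) -> float:
    """Return the structural test pressure (1.5 x design pressure) in psf.

    The PG number is the design pressure in psf; valid grades run from
    PG15 to PG100 in 5 psf increments.
    """
    if pg < 15 or pg > 100 or pg % 5 != 0:
        raise ValueError("PG ratings run from PG15 to PG100 in 5 psf steps")
    return 1.5 * pg


for pg in (15, 40, 100):
    dp_pa = pg * PSF_TO_PA
    stp = structural_test_pressure_psf(pg)
    print(f"PG{pg}: design pressure {pg} psf ({dp_pa:.0f} Pa), "
          f"structural test pressure {stp:.1f} psf")
```

So, for example, a PG40 product must survive a uniform load test at a structural test pressure of 60 psf in addition to passing the other tests in sequence.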

But NAFS testing does not end with the achievement of a performance grade. In addition to meeting the minimum test specimen size and performance grade requirements to qualify for one of the four performance classes (R, LC, CW and AW), there are auxiliary tests that evaluate product durability features.

While products can achieve a performance grade by testing to the minimum specified Canadian requirements in each of these test categories, manufacturers are well advised to test products to their maximum capabilities to qualify them for a larger market. After all, in Canada we have two levels of operable product air tightness, and depending on building location and height, products may require significantly higher water penetration resistance than the minimum test values required to achieve a performance grade. These higher-than-minimum air and water penetration resistance values are reported in the NAFS Secondary Designator.

From reading the code, one would never guess that “performance class” (which is different from performance grade) is the key concept at the heart of NAFS. The first of the general requirements in the standard is an introduction to the gateway performance requirements used to assign products to one of four performance classes, and these gateway requirements are summarized in Table 1 of NAFS-08. Eighty-three pages later, the standard concludes with Table 27, also titled Gateway Performance Requirements, which expands Table 1 to seven pages of detailed testing requirements that exist for one purpose only: to classify each of the 30 product types in NAFS to one or more performance classes.

The code, however, says nothing at all about performance class, other than “the minimum level of performance required for windows, doors and skylights shall be that of the Performance Class R.” That is the lowest of the four classes.

It seems clear that while performance class distinguishes products on criteria that may be important to an architect or specifier, it is not a topic of interest to the code, which concerns itself primarily with performance grades.

NAFS in Part 9
In Part 9 we see a distinction between two types of fenestration products: “manufactured and preassembled” products whose performance is evaluated by testing to NAFS and the Canadian Supplement (Subsection 9.7.4), and “site-built” products whose performance is evaluated under Part 5 (Subsection 9.7.5).

The list of products the NAFS standard identifies as being outside its scope includes curtain wall and storefront, commercial entrance systems, revolving doors, commercial steel doors, and sloped glazing systems (other than unit skylights and roof windows). Window wall products, while not named in NAFS-08, are defined in NAFS-11, where they are implicitly treated as mulled windows, and as such are legitimately within the scope of NAFS.

The use of the term site-built to refer to products outside the scope of NAFS is all the more striking given that there is no other meaning defined for this term in the code. These commercial glazing products are not actually built on-site—the frames are typically fabricated in an indoor manufacturing environment. It is possible that here the code is referring to site-glazed products, as the “manufacturing” of a fenestration product is not considered complete until the glass is installed. The distinction between site-glazed and factory-glazed is an important one in the context of product labeling.

Subsection 9.7.5 makes it clear that under Part 9, the design and performance verification of commercial fenestration products that are outside the scope of NAFS is to be handled in the same way as in Part 3 buildings: with reference to Part 5 requirements for professional design and supervision.  

NAFS in Part 5
Fenestration performance in Part 5 is addressed in Subsection 5.10.2., titled Windows, Doors and Skylights. The relevant article describes the structural, air leakage and water penetration design and performance requirements this way:

Structural Loads, Air Leakage and Water Penetration

1) Windows, doors, skylights and their components shall be designed and constructed in accordance with

a)  Article, Section 5.4. and Section 5.6., or

b) Article, where they are covered in the scope of the standards listed in Sentence

It is interesting that this article points to two ways of addressing air-water-structural performance: clause (1)(a), which refers to the specific Part 5 design and performance requirements, and clause (1)(b), which refers to NAFS and the Canadian Supplement but only for products within the scope of NAFS. The use of “or” at the end of clause (1)(a) suggests the NAFS compliance path in clause (1)(b) is an alternative to the Part 5 design parameters we used to design commercial glazing systems in previous editions of the code.

Designing fenestration systems to comply with the performance requirements in clause (1)(a) is the domain of professional architects and engineers. But NAFS testing provides considerably less assurance about the real-world performance of installed products than one might expect, and to my mind it is not a substitute for Part 5 design. Here are four reasons why.

First, for vertical fenestration, NAFS testing evaluates only wind-load resistance. But a fenestration product designed to Part 5 must also consider guard loads and human impact loads: important code requirements that affect windows with sills below guard height in most buildings.

Second, NAFS performance ratings are intended to validate the performance of the product only and not the installation method. So the NAFS-tested performance grade may give you a pretty reliable measure of the air and water tightness of the product to the edges of its frame. While this is useful information, a Part 5 designer is also interested in the performance of the interface between the fenestration product and the wall. Additional testing at the jobsite will be required to verify what NAFS cannot.

Third, NAFS testing for wind-load resistance is based on a fallacy: that one can separate the structural performance of a product from how it is anchored to a particular substrate. NAFS explicitly states that testing for performance grade is a test of the product and not the installation method. The test specimen installation language in the standard is so permissive that a large proportion of manufacturers anchor test specimens in ways that maximize performance grade ratings but could not responsibly be replicated in most buildings. In my view, this makes NAFS-tested performance ratings an unreliable indicator of real world structural performance. Because a product’s ability to resist imposed loads is critically dependent on anchoring, and anchoring methods must address particular substrate assemblies as well as building movements due to wind and seismic events, understanding how a product is anchored is critically important to the structural designer. This information should be available in a NAFS test report, but in my experience test reports generally do not describe the test specimen installation in enough detail to determine their applicability to project conditions.

Fourth, there can be questions about the extent to which the NAFS-tested performance represents the work of the same party that will supply and install the product on a particular building. There are several things designers need to be aware of here. For one, there are certification programs that allow fenestration product manufacturers who have never tested the products they build to label those products with NAFS ratings based on testing performed by other parties, such as the developer of the fenestration system. The rationale is that as long as the untested manufacturer uses the same components and follows the test documentation, the products should perform identically. This is, in my opinion, another fallacy. While this approach may be adequate to validate the performance of products such as fire doors, it is not robust enough to validate air or water performance, which are affected by the most minute deviations in manufacturing processes and the tolerances of key components. Documentation is no substitute for testing of these properties.

The same concern applies to the advertised NAFS ratings from major fenestration system developers whose products come to market through fabricators and glazing contractors. If the party fabricating and glazing the product is not the party that tested the product, the work of the fabricator and glazier will need to be independently tested to learn if it has the performance attributes advertised by the system developer.

NAFS performance ratings are generally reported on product labels, and some architects and building officials expect to see NAFS labels on all products, including site-glazed products, on Part 9 buildings. Unfortunately, this is generally not possible—or meaningful—for site-glazed products (one possible exception being site-glazing by the manufacturer who tested the product and uses the same materials and methods as when factory glazing).

All NAFS certification programs for fenestration products require labeling to take place at the manufacturing facility. Site labeling is not permitted except in extenuating circumstances and under special dispensation. The act of applying a label constitutes a declaration by the manufacturer that the product rating is valid because the rating is based on the testing of a production line sample that is identical in every significant way to the labeled product. It presupposes that the manufacturer has control over all the components, fabrication, assembly and glazing operations, and can assure the purchaser of the labelled quality. This is only possible for factory-glazed products.

Site-glazed products are not usually glazed by the same party that built and tested the product line. What testing exists was likely performed by the system developer some time ago. The fabricator may or may not have access to the detailed test reports or the detailed fabrication instructions that would be required to attempt to replicate the product as it was tested. The glazier may or may not have access to the same information, but is normally not under the supervision of a party that tested or regularly manufactures and glazes that product. When there is no single party in control of production, testing and labeling, there is no party with the authority or credibility to apply a label.

NAFS and commercial fenestration
The code is clear: many commercial fenestration products are outside the scope of the NAFS standard. The code refers to these products as “site-built” in Part 9. Regardless of whether these products are installed in Part 3 or Part 9 buildings, the code expects them to be designed to Part 5 as we have always done, and as if NAFS didn’t exist.

The code is also clear that for commercial fenestration products within the scope of NAFS, NAFS testing is permitted to demonstrate compliance with the air-water-structural requirements of the code in both Part 3 and Part 9 buildings. While the code permits this, designers need to be aware that NAFS testing is not intended to validate the installation method, and the unfortunate reality is that the standard permits manufacturers to anchor NAFS test specimens in a manner that differs from their usual or published installation practices. It would not be prudent to assume that tested performance grade ratings are valid when the test specimen’s installation method differs significantly from the installation method to be used at a particular building.

It is also worth considering whether NAFS wind-load resistance testing addresses all the Part 5 structural design and performance requirements applicable to fenestration products on a given building. For many buildings, there will be good reasons to not rely on NAFS testing alone but to supplement it or even replace it entirely with Part 5 design.

NAFS testing is not without value, however. Conducted properly, with products installed and anchored as they would be installed in the field, it can validate the performance of the product and the installation method, even if that is not the stated intent of NAFS. When products are tested to their limits and beyond, manufacturers can learn a lot about a product’s failure modes, and improve designs to make them more robust.

With respect to NAFS labeling, it is not possible to label site-glazed products with NAFS performance ratings unless the labeller is the entity that controls the entire manufacturing process, including glazing, and can vouch that the tested performance applies to the labeled production-line specimens. Even in this situation, site labelling would not be permitted by a third-party certifier. While NAFS testing and labeling provides the only practical method to demonstrate compliance with code requirements under Part 9, it is less useful under Part 3. When labeling is applicable, as for factory glazed products, it is not always desirable. On large buildings it is sometimes preferable to accept the manufacturer’s reporting of applicable NAFS ratings on the project shop drawings. These will be retained long after product labels are gone.

To my mind the chief value of NAFS testing and labeling is to verify that products comply with the requirements of a performance class, a property critically important to NAFS, but of little interest to the code. The ability to compare products on the basis of Performance Class is the chief innovation in the NAFS standard, one that gives designers a new tool to both prequalify products and evaluate proposed substitutions and claims of equivalency. This is a topic for another article. 

About the author

Al Jaugelis is a fenestration specialist at the RDH Group in Vancouver. He writes about the NAFS standard at the NAFS in Canada blog, on LinkedIn, and @Al_Jaugelis.
