Connector and Cable Assembly Supplier

To Fail, or Not to Fail, That is the Question

It is probably not necessary to say that the opinions Max Peel and I express in our articles are personal opinions, but they are opinions grounded in our many years of experience with the design, materials, testing, and failure analysis of connectors. I open this article with that statement because the opinions that follow are more “theoretical,” and less “experiential,” than is generally the case. The topic this time is the determination of a “failure” criterion. (There will be lots of quotation marks around “problematic” terms in this article.) I ended my last article, on acceleration factors, with the following:

“When reliability assessment is the objective, the quality of the rationale (for defining acceleration factors) takes on additional importance and another factor comes into play. The failure criterion, generally referenced to a change in contact resistance, must be known. Discussion of this issue is controversial and will be discussed in my next article.”

The allowed change in contact resistance is arguably the most important criterion in assessing the performance and reliability of a connector system. This article is intended to provide some insight into deriving that criterion for a connector application.

There are many ways a connector may fail, some “catastrophic” and some “systematic.” Catastrophic failures are a result of “defects,” and systematic failures a consequence of design/materials choices. Defects include poor solder joints, bent pins, and “inadequate” plating, among others. In other words, defects, in this context, are a result of manufacturing or assembly processes. Failures in this category are often experienced immediately, or rather abruptly, after short field life. Such failures, though important, are outside the scope of this discussion.

The focus here is on systematic, that is, design/material-related failures. Examples include inadequate contact force in the initial design; loss in contact force due to stress relaxation, as a result of improper spring material selection; loss in contact force due to housing creep, another improper material selection; inadequate plating thickness for the intended durability life or operating temperature; and variants or combinations of these and other design/material choices.

Catastrophic failures may manifest themselves in initial high resistances or abrupt dramatic increases in contact resistance. Systematic failures, in contrast, generally occur over time and are characterized by an increase in contact resistance, perhaps at an increasing rate, after a period of stable contact resistance performance; or through intermittent high-resistance events. So it is reasonable to expect that contact resistance measurements, and an “acceptable” change in contact resistance criterion, may provide a way to “assess” connector performance over time.

It’s a “reasonable” expectation, but one not necessarily easily realized. Two issues arise immediately: How should contact resistance be measured, and what is an “acceptable” change in contact resistance? Consider each in turn.

Contact Resistance Measurement
Previous articles in this series, “Measure Twice, Test Once” and “Contact Resistance: Key to Success,” discussed contact resistance measurement practices in detail. Connector qualification testing protocols generally require low-level circuit resistance (LLCR) measurements, in which the open-circuit measurement voltage is held to 20 millivolts. This voltage level, sometimes referred to as “dry circuit,” ensures that the applied voltage is insufficient to disrupt any films, arising from various degradation processes, that may exist on the contact surfaces. Measurements are made before and after a prescribed series of conditioning and exposure steps, as described in the previous articles “The How and Why of Connector Testing” and “Devil or Angel: Connector Testing.”

This protocol, however, does not detect the resistance intermittents identified above as an indicator of contact resistance degradation. A degrading contact interface will typically show an increase in contact resistance with time. But, more importantly for our purposes, it will also show an increasing frequency of intermittents, and an increasing magnitude of the resistance change at each event, as a function of time. Conventional contact resistance measurements will detect only the overall gradual increase in contact resistance. Detecting the intermittent pattern requires a more sophisticated protocol: continuous measurement at a short sampling interval, to capture the intermittent nature of the changing contact resistance.
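To make the continuous-measurement idea concrete, here is a minimal sketch of how intermittent events might be pulled out of a high-rate resistance trace. The 20% threshold, the baseline value, and the sample data are illustrative assumptions, not values taken from any standard or from the article.

```python
# Hypothetical sketch: detecting intermittent resistance events in a
# continuously sampled contact-resistance trace. Threshold and data
# are illustrative assumptions only.

def find_intermittents(samples, baseline, threshold=0.20):
    """Return (start_index, duration_in_samples, peak_resistance) for each
    run of consecutive samples exceeding baseline * (1 + threshold)."""
    limit = baseline * (1.0 + threshold)
    events, start = [], None
    for i, r in enumerate(samples):
        if r > limit and start is None:
            start = i                      # event begins
        elif r <= limit and start is not None:
            run = samples[start:i]         # event ends; record it
            events.append((start, i - start, max(run)))
            start = None
    if start is not None:                  # trace ended mid-event
        run = samples[start:]
        events.append((start, len(samples) - start, max(run)))
    return events

# A conventional before/after LLCR measurement would average over this
# trace and miss the two brief excursions entirely.
trace = [10.0, 10.1, 10.0, 14.5, 10.2, 10.1, 15.8, 16.2, 10.0]  # milliohms
print(find_intermittents(trace, baseline=10.0))
# → [(3, 1, 14.5), (6, 2, 16.2)]
```

Tracking both the count and the duration of such events over the life of a test gives exactly the trend information, increasing frequency and increasing magnitude, that a single before/after measurement cannot.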

Acceptable Change in Contact Resistance
The increase in contact resistance that can be tolerated by an electrical or electronic system is, of course, application dependent. In a low-current DC or low-frequency application, the gradual increase in contact resistance cited above may be the appropriate criterion. In these applications, a resistance change of several, or perhaps several tens or even hundreds, of milliohms may be acceptable. The “upper limit” of acceptable change may well be determined by the fact that the contact resistance versus time curve begins linearly and transitions to an exponential rate as the contact interface degrades. This pattern is reasonable when the asperity model of the contact interface (see “Connector Degradation Mechanisms”) is considered. Recall that the contact interface resistance is proportional to the inverse of the contact area; as the contact area decreases, the contact resistance increases. Initially the contact area decreases slowly in proportion to the initial contact area, but as the area shrinks, the proportional rate of decrease increases. So one limitation on an acceptable change in contact resistance would be to remain below the “knee” in the contact resistance versus time curve. This consideration may limit the acceptable change in contact resistance to the tens-of-milliohms range.
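The accelerating rise, and hence the “knee,” follows directly from the inverse-area relation. If the fraction of initial contact area lost is f, then R = R₀/(1 − f): early losses raise resistance almost linearly, while later losses raise it dramatically. The numbers below are an illustrative sketch under that assumption; the 10-milliohm starting value is arbitrary.

```python
# Illustrative sketch of the "knee": taking contact resistance as
# inversely proportional to contact area (per the asperity model),
# each equal loss of area raises resistance more than the last.
# R0 is an arbitrary assumed starting value.

R0 = 10.0  # initial contact resistance, milliohms (assumption)

def resistance(fraction_lost):
    """R = R0 / (1 - f), where f is the fraction of initial area lost."""
    return R0 / (1.0 - fraction_lost)

for f in (0.0, 0.2, 0.4, 0.6, 0.8, 0.9):
    print(f"area lost {f:.0%}: R = {resistance(f):5.1f} milliohm")
# → 10.0, 12.5, 16.7, 25.0, 50.0, 100.0 milliohms: near-linear at
#   first, then rapidly diverging as the remaining area vanishes
```

Staying “below the knee” in this picture means limiting the allowed change to the early, near-linear part of the curve, which is consistent with a criterion in the tens-of-milliohms range for a contact starting near ten milliohms.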

If, however, the application is a power application, a lower criterion may be applicable to avoid the effects of Joule (I²R) heating. It is important to note that a power application is not necessarily a high-current application if the physical size of the contact is small. Small contacts will have both a higher bulk resistance and, more importantly, a higher contact interface resistance, because the contact area will generally be smaller. So for a power contact, the change-in-contact-resistance criterion may range from a fraction of a milliohm to several milliohms, depending on the application current.
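A quick worked example shows why the power-contact criterion is so much tighter. The extra heat dissipated at the interface by a resistance increase ΔR is P = I²·ΔR, so the same milliohm-scale change that is negligible in a signal circuit becomes significant at high current. The currents and resistance change below are illustrative assumptions.

```python
# Worked example of Joule (I^2 * R) heating: the additional power
# dissipated at a contact interface due to a resistance increase.
# Currents and the 5-milliohm change are illustrative assumptions.

def extra_heating_w(current_a, delta_r_milliohm):
    """Additional dissipated power, in watts, from a resistance
    increase of delta_r_milliohm at a current of current_a amperes."""
    return current_a ** 2 * (delta_r_milliohm / 1000.0)

for amps in (0.1, 1.0, 10.0, 30.0):
    print(f"{amps:4.1f} A, +5 milliohm -> {extra_heating_w(amps, 5.0):.4f} W extra")
# → 0.1 A: 0.00005 W (negligible); 30 A: 4.5 W, enough to drive
#   further thermally activated degradation at the interface
```

The quadratic dependence on current is the point: a tenfold increase in current multiplies the extra heating by a hundred, which is why a fraction of a milliohm can be the appropriate criterion for a high-current contact.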

High-frequency digital data applications are a different story. In this case it is the intermittents that may dominate the acceptable change in resistance criterion. As mentioned, the magnitude of intermittent resistance fluctuations increases as the contact interface degrades. And, arguably more importantly, the duration of the intermittent events tends to increase. The combination of these two effects will limit the acceptable resistance level to ensure that the “knee” in the contact resistance versus time curve, as it affects the intermittent event characteristics, is avoided.

The mechanisms of contact interface degradation leading to increased contact resistance are reasonably well known in principle, but complex in practical interpretation. The purpose of this article is simply to highlight some of the major factors that must be considered in attempting to derive an acceptable change in contact resistance for a given application arena, a critical parameter in assessing the performance and reliability of a connector through a testing protocol.

Dr. Bob Mroczkowski
