AASHTO re:source Q & A Podcast

Dealing with Repeat Low Ratings

August 03, 2020 AASHTO re:source Season 1 Episode 2

 AASHTO re:source Q&A Podcast Transcript

Season 1, Episode 2: Dealing with Repeat Low Ratings

Released: August 3, 2020 

Hosts: Brian Johnson, AASHTO Accreditation Program Manager; Kim Swanson, Communications Manager, AASHTO re:source 

Guests: John Malusky, Proficiency Sample Program Manager; Tracy Barnhart, Quality Manager, AASHTO re:source

Note: Please reference AASHTO re:source and AASHTO Accreditation Program policies and procedures online for official guidance on this, and other topics. 

Transcription is auto-generated. 

[Theme music fades in.] 

00:00:02 ANNOUNCER: Welcome to AASHTO re:source Q&A. We're taking time to discuss construction materials, testing, and inspection with people in the know. From exploring testing problems and solutions to laboratory best practices and quality management, we're covering topics important to you. Now, here's our host, Brian Johnson.

00:00:21 BRIAN: Today on the podcast I have two guests. One is John Malusky. He's the manager of the Proficiency Sample Program, and the other is Tracy Barnhart, our Quality Manager here at AASHTO re:source. The reason why I have you guys here today is because we have released the results for the fine aggregate proficiency samples and the soil classification and compaction proficiency samples, and we are getting a lot of requests for blind samples, which are used to resolve suspensions that certain laboratories get when they receive low ratings on both samples in the pair for two consecutive rounds. OK, so I'm just laying out the problem that we're going to solve today.

00:01:17 BRIAN: I don't take issue with the fact that it happens, because it's programmatic activity, right? It's something that happens all the time. It's a way for the laboratories to resolve issues that they have, and it's a way for us to make sure that the laboratories that maintain accreditation are being held to a standard, which is to make continual improvements and to not continually fail proficiency samples. Here's where the problem is: I am seeing that it is the same laboratories, over and over again, that are ordering blind samples and failing. So Tracy, as the quality manager, the master of all of our corrective actions, can you give people some advice on how they can take effective corrective actions so that they do not keep having to order blind samples to resolve suspensions?

00:02:21 TRACY: I think the key to that, Brian, is to dig a little bit deeper into what the problem might be and try to really get to the root cause of the problem. I think a lot of times people are kind of quick on their corrective actions and they say, oh, well, I reviewed the data and everything's fine, or maybe we got a bad sample. I think people are quick to just get it over with and not really dig deeper into what really might be going on.

00:02:49 BRIAN: That's right. I'm going to stop you for a second because I want to explore that particular aspect of the process that is, I think, leading to the failure: the lack of effort in submitting corrective actions and reviewing what went wrong. Because some of the emails I'm getting say, I already looked into it and we're not doing anything wrong. So then I always think, OK, it's possible. Let's see how they're doing. So I pull up their performance chart and I see that even when they're passing, they're above the mean. In the last three rounds they've gotten zeros on every sample, so it's essentially six samples in a row over three years. So to tell me that it's a bad sample, or that they're doing everything correctly, is very difficult for me to believe. John, what do you say to those who say that they've got a bad sample?

00:03:58 JOHN: Well, Brian, you know I'm going to say: is it possible that it could happen? Absolutely. We're not 100% perfect. We do try our best. We put in as many protocols as we possibly can to ensure that we're sending quality samples. You know, with what you're specifically talking about, things like fine aggregate, we don't really have the capability to, you know, set specific numbers or quantities for every single size in the sieve stack, all the way from the half inch down to the percent passing the No. 200. But we use reasonable precautions to make sure that that doesn't happen. With something like a fine aggregate sample, we sample from a bulk stockpile.

00:04:47 JOHN: However, we control things like moisture content and where we pull material from that pile. We check moisture content multiple times throughout the processing and packaging period to make sure that we're maintaining the same moisture content, so we don't drain down fines, or have it too dry and, you know, lose fines in the process itself. We're an ISO-accredited PT provider, meaning that we have to test for homogeneity and stability. So during our production process, we pull 10 random samples from each side of the sample pair, your odd and your even, and our laboratory manager takes split samples from those 10. So he actually runs sieve analyses on each one of those fine aggregate samples to check the gradation criteria.

00:05:40 JOHN: And we run an analysis of variance on it and determine how good the material is. I'm going to say almost 99% of the time the material falls well inside of the precision estimate values that are in AASHTO T 27 and ASTM C136. So we're pretty confident that the material is better than what's in the standard. Now, it doesn't necessarily pass the ISO requirement every single time, but the statistics pick out immediate differences within material, and when you're talking about a sieve analysis, you know, in the grand scheme of things, is two tenths of a percent passing out of a 500-gram sample really that different? You know, we also always check, if we do have any sample

00:06:42 JOHN: that appears to be an issue, whether that sample would have caused any failure for a laboratory. So, yeah, I guess I can kind of stop there and see if you have any questions or comments about what we do from that point. But I could go on, you know, basically for hours about how we do this.
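[Editor's note: To illustrate the kind of homogeneity screen John describes, here is a minimal sketch in Python. The duplicate-result layout, the example numbers, and the ISO 13528-style acceptance criterion (between-sample standard deviation no greater than 0.3 times the evaluation standard deviation) are illustrative assumptions, not the program's actual procedure or data.]

```python
import statistics

def homogeneity_check(duplicate_results, sigma_pt):
    """ANOVA-style homogeneity screen for one measurand in a PT round.

    duplicate_results: list of (result_a, result_b) pairs, one pair per
    randomly pulled sample (e.g., percent passing one sieve, measured on
    two splits of the same box). sigma_pt: the standard deviation used to
    evaluate participants (a placeholder here).
    """
    means = [(a + b) / 2 for a, b in duplicate_results]
    diffs = [a - b for a, b in duplicate_results]

    # Between-sample and within-sample variability (ISO 13528 Annex B style).
    sx = statistics.stdev(means)                                # spread of sample means
    sw = (sum(d * d for d in diffs) / (2 * len(diffs))) ** 0.5  # repeatability
    between_var = sx ** 2 - (sw ** 2) / 2
    ss = between_var ** 0.5 if between_var > 0 else 0.0

    # Common acceptance criterion: ss <= 0.3 * sigma_pt.
    return ss, ss <= 0.3 * sigma_pt

# Hypothetical percent passing the No. 200 sieve on ten sampled boxes, run twice each.
pairs = [(3.1, 3.0), (3.2, 3.1), (3.0, 3.0), (3.1, 3.2), (2.9, 3.0),
         (3.1, 3.1), (3.0, 3.2), (3.2, 3.2), (3.0, 3.1), (3.1, 3.0)]
ss, ok = homogeneity_check(pairs, sigma_pt=0.3)
print(f"between-sample sd = {ss:.3f}, homogeneous: {ok}")
```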

00:07:01 BRIAN: Yeah, yeah, I know.

00:07:02 JOHN: I know, I don't want to get too far into it.

00:07:04 BRIAN: You've touched on a lot of things that we could explore, but I know that one thing you mentioned is that you're comfortable saying that you're going to be well within the precision estimate limits on the test methods.

00:07:22 JOHN: Yeah. So, our outside—

00:07:23 BRIAN: One of the big arguments that people make sometimes is that we shouldn't use the average from the current round as the basis for establishing these suspensions; we should use the precision estimate. But as you know, and as many people working in AASHTO COMP or at ASTM know, a lot of those precision estimates were actually developed around our proficiency samples from the past, and we presumably have made improvements since then to narrow down those precision estimates. What's that interaction like between our program and the standards developers to keep those numbers relevant?

00:08:18 JOHN: So we have had a lot of interaction, more so, I guess I would say, recently, with making changes to some of those precision estimates. We haven't had any change to the aggregates in a little bit. It's probably something we do need to look into. But recently we have made changes to the performance-graded binder precision estimates, and for the soils, I think the Proctor methods, we provided some data for that. So we do attempt to provide data to make those changes. We're very active in providing information to the different subcommittees in ASTM for continual revisions of their precision estimates. So we do a lot to make sure that we're contributing our data to that and adjusting the pool accordingly. You know, but that's the one thing that

00:09:15 JOHN: you can take a look at over the course of time. You can see that the programs, especially the ones we've got out there for the CMT industry, are making it better. You know, the general variability that you see within those samples and within the data has gotten smaller over the course of time, which is exactly what we want. You want to see that continual improvement. The tricky part is, you know, once you get so good, you're bound by your equipment and, you know, the standard itself. So once you get to that level, it's very, very difficult, and T 27 is one of those things. I mean, right now I'm pretty sure we have that nailed after, I would probably say, close to a century. I'm not exactly sure how long we've been doing sieve shakes, but I mean, we have probably 30 to 40 years of data, you know, dating all the way back to the 1960s and 70s. So I mean, we're looking at, what, 50 years of data that we've been evaluating these methods, and you know, I think there's only really so much you can do at a certain

00:10:22 JOHN: point, and you're going to max that out. You know, the one thing I will say there is that the precision estimates that are in the standard, they're very, very good, and it's even a challenge for us sometimes to meet those criteria. But when you're looking at something like the reproducibility limit, which is, you know, your between-lab situation, that's how we evaluate our boxes. Basically, lab A is going to get this box, lab B the next, and so on and so forth all the way through it. And when we evaluate that based off the reproducibility limit, we're typically 50% less than the precision estimate for that criterion. So, you know, like I said, we're very confident that those materials that are going out there are the best they can be. Now, like I said, it's possible where, you know, in the random scoop that happened, there was just a giant clump of minus 200. It does happen. I won't say it won't, but the probability of it happening to you, as you mentioned, three years in a row to cause low ratings is pretty low.
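[Editor's note: As a rough illustration of the between-lab comparison John mentions, the sketch below flags pairs of results whose difference exceeds a reproducibility ("d2s") limit, taken as roughly 2.8 times the multilaboratory standard deviation in ASTM C670-style practice. The standard deviation and the results are placeholders, not values quoted from AASHTO T 27 or ASTM C136.]

```python
from itertools import combinations

MULTILAB_SD = 0.5                 # assumed 1s (between-lab) value, percent passing
D2S_LIMIT = 2.83 * MULTILAB_SD    # acceptable difference between two labs' results

lab_results = {                   # hypothetical percent passing one sieve
    "Lab A": 62.1,
    "Lab B": 62.6,
    "Lab C": 64.0,
}

# Compare every pair of labs that tested splits of the same material.
for (lab1, r1), (lab2, r2) in combinations(lab_results.items(), 2):
    diff = abs(r1 - r2)
    status = "exceeds d2s limit" if diff > D2S_LIMIT else "within limit"
    print(f"{lab1} vs {lab2}: |difference| = {diff:.2f} ({status})")
```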

00:11:30 BRIAN: Right. Because every package is split up randomly, you know. The ingredients are all separated. There's no way you could get a problem on every single sample. Kim?

00:11:47 KIM: I just wanted to ask: if there is a problem, John, let's say something has happened where there is a problem with the samples, or something went out that shouldn't have gone out, on the rare occasion that does happen, how is that handled in the Proficiency Sample Program and in the Accreditation Program? How do you guys handle that rare occasion when something may be wrong with the whole sample?

00:12:13 JOHN: So typically we catch that stuff before it even goes out the door. That's the main goal of the homogeneity testing. We want to make sure that we don't have a sample go out and be sent to someone in some sort of a poor condition that's not homogeneous. However, there are times where it does happen, and it's not apparent until we analyze the full round of data. When that happens, I analyze the data, typically one to two weeks prior to the release of the final report, and if I see anything that would cause an immediate flag, especially with the scatter plot, which is kind of the immediate telltale sign that something happened, I'll first contact Brian and then contact Bob Lutz, the AASHTO re:source manager, and we'll generally have a discussion about what to do. Usually, what that relates to is we'll immediately suppress ratings for any of those line items that would affect the laboratories' accreditation.

00:13:16 JOHN: We typically go back and forth through emails, or, you know, right now typically email. Obviously, we're not having any kind of face-to-face discussion now other than over the web. But, you know, we usually do our best to ensure that any kind of issue doesn't affect any of the participants, especially when it comes to accreditation.

00:13:37 BRIAN: That's a great point, Kim. Thanks for asking that. Because John's program is basically the stopgap measure that keeps the accreditation program from taking action when there's a low rating that is undeserved. So if there is a problem with the sample, John will make a change so that no ratings are issued, which means there are no rating issues and there cannot be a repeat low rating for that particular item. So then that would skip to the next round for our determination of whether or not that laboratory should be suspended, because it takes four total sample failures to get to a suspension: there are two samples in each round, and it takes consecutive rounds of failure to receive a suspension. So that's good clarification.
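[Editor's note: For readers following the suspension logic Brian just laid out, here is a small sketch that encodes it: a laboratory is suspended only after receiving low ratings on both samples of the pair in two consecutive rounds, and a round with suppressed ratings cannot contribute to that streak. The rating scale and the "low" cutoff are placeholders; only the both-samples, two-consecutive-rounds logic comes from the discussion.]

```python
def round_failed(pair, low_threshold=2):
    """A round counts as failed only if BOTH samples in the pair earn low
    ratings. The numeric cutoff is a placeholder, not a published value."""
    return all(rating <= low_threshold for rating in pair)

def suspension_triggered(history):
    """history: list of rounds, oldest first; each item is a (rating, rating)
    pair, or None if ratings were suppressed for that round."""
    previous_failed = False
    for pair in history:
        if pair is None:
            previous_failed = False   # no ratings issued, so no repeat low rating
            continue
        failed = round_failed(pair)
        if failed and previous_failed:
            return True               # four low-rated samples across two straight rounds
        previous_failed = failed
    return False

print(suspension_triggered([(4, 5), (1, 2), (0, 1)]))  # True: two consecutive failed rounds
print(suspension_triggered([(1, 1), None, (0, 0)]))    # False: suppressed round breaks the streak
```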

00:14:38 BRIAN: Now, Tracy, we have neglected you for a while, so I want to come back to you about corrective action. Let's talk about it. We've talked about what doesn't work; let's talk about what works. Let's say you're a laboratory in this situation, a new laboratory. You're in the program. You didn't do so well the first round. You thought you took corrective action effectively; it turns out you didn't. When you get the next round's results, you failed that too. What do you need to do now to make sure that this doesn't happen again?

00:15:04 TRACY: I think there are three things that laboratories should focus on, specific to their proficiency sample ratings, when they receive low ratings. The first thing would obviously be to check the data and the calculations to make sure that no errors were made. Maybe there was a transcription error between what was on the data sheet and what was actually entered into the online system, or maybe a calculation was incorrect. That's normally where people start. Then, of course, you want to check your equipment to make sure that the equipment is currently calibrated, standardized, or checked, that there aren't any problems with it, and that the maintenance has been performed on that equipment. And then lastly, a lot of people, I know, with their corrective actions, they say, well, we retrained the technician, they didn't perform the test correctly. But are they

00:15:55 TRACY: really going back and making sure that the test is being performed correctly? I think it's a good idea to have somebody watch the technician actually perform the test that they received the low ratings on, to make sure that the test is actually being done in accordance with the AASHTO and ASTM test methods. And by doing that, I think you can turn things around. Corrective actions are reactive, of course; you're reacting to a problem that occurred. But if you take a little bit of time and do a thorough review and try to get to the root cause of the problem, you're turning it into a proactive process, and that should hopefully prevent this from happening again. Because nobody likes to do corrective actions, let's be honest. I don't like doing them. I'm sure our customers do not like to do them, so why not take a little bit more time and turn it into a proactive process?

00:16:46 BRIAN: That's a great idea. And yes, I feel their pain as well, because anytime we get a negative customer response on an outcome, I also have to go through the corrective action process, which I always find useful and yet unenjoyable at the same time, much like our customers. And I want to talk about one other thing you mentioned there, about how it really is useful to watch the person. We had an issue a while back where a laboratory had gotten caught with some falsified records. These were not their test results; these were internal equipment check records. And I reached out to the person and said, hey, you need to look into this. We expect a corrective action that's meaningful, and this needs to be resolved. The first response I got back was,

00:17:43 BRIAN: I checked with the person, they didn't do it, they don't know what happened. And I said, OK, well, does the person know how to do that work that they documented? Have you observed it? And it took a bit of back and forth to get them to actually observe the person and determine that, yes, there was a problem. A lot of times people don't want to admit that they don't know how to do something, so they'll look at other sources to see what numbers make sense, or they'll try to figure it out, but they may not quite get it. And really, the only way to know for sure is by objectively watching that person and seeing what they do. I know it takes time, but it can save you a lot of hassle and a lot of time in the long run if you do it upfront. So thanks for that insight.

00:18:45 BRIAN: I think that would be really helpful for people who run into low ratings on proficiency samples. Any other parting ideas from either of you on what laboratories can do when they receive repeat low ratings?

00:19:03 JOHN: The one thing that you touched on, Brian, and I'd like to reiterate, is the performance charts. You know, it's kind of difficult if you are a brand new laboratory, but if you've got years' worth of data, those charts are an incredible tool for you to use to evaluate any kind of issue that took place. I mean, like I said, you can go back; I think the performance charts go back ten years, or 10 rounds of testing. So you have a great backlog of data where you can investigate what could have possibly happened. You know, it's something that's a little bit difficult to interpret, but when you start looking at the sieve analysis data and looking at those performance charts, you can almost go back and see where the issue started, not necessarily with the sieve size you failed on, but you might see something with a sieve size above or below that sieve where you saw an issue, or even with the washing portion of the sieve analysis.

00:20:12 JOHN: You know, we see a lot of laboratories that overwash or underwash material, and that translates through the rest of the stack, not necessarily on the coarser fractions, but you would see something like that possibly on the No. 100 or No. 200 with an overwashing or underwashing issue. So that's probably the one final thing I would stress: really dive into those performance charts. Those are kind of the most important tool that we have on the website.
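[Editor's note: To make the performance-chart review John recommends a bit more concrete, here is a hedged sketch that scans a few rounds of one lab's results for a persistent one-sided offset on each sieve relative to the round averages. The z-score framing, the example data, and the bias threshold are assumptions about how such a chart might be read, not the program's rating formula.]

```python
# Each sieve maps to a list of (lab_result, round_mean, round_sd) tuples,
# one per recent round. All numbers are hypothetical.
lab_history = {
    "No. 50":  [(18.0, 16.5, 0.8), (17.9, 16.4, 0.7), (18.2, 16.6, 0.8)],
    "No. 100": [(9.5, 9.4, 0.5),   (9.3, 9.5, 0.5),   (9.6, 9.4, 0.5)],
    "No. 200": [(4.9, 3.8, 0.4),   (5.1, 3.9, 0.4),   (5.0, 3.7, 0.4)],
}

for sieve, rounds in lab_history.items():
    z_scores = [(result - mean) / sd for result, mean, sd in rounds]
    avg_z = sum(z_scores) / len(z_scores)
    # A persistent offset in one direction suggests a systematic issue
    # (for example, a washing problem that shows up on the finer sieves).
    if all(z > 1 for z in z_scores) or all(z < -1 for z in z_scores):
        print(f"{sieve}: consistent bias, mean z = {avg_z:+.2f} -- worth investigating")
    else:
        print(f"{sieve}: no persistent bias, mean z = {avg_z:+.2f}")
```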

00:20:46 TRACY: And I think it's important to point out that sometimes you just don't know what happened when you're doing these corrective actions and trying to determine the root cause or root causes of an issue, especially on proficiency sample low ratings. And that can be very frustrating, and we understand that. But it's important to take the time and at least try to determine what the issue is. When you're getting into repeat low ratings, I feel like it should be a little bit easier to identify what the issues are and really try to correct those, but we totally get it. Sometimes you go through all of those steps and you still just have no idea what happened.

00:21:27 BRIAN: Kim?

00:21:29 KIM: Yeah, I just wanted to point out, too, as a part of this: if they're getting low ratings on our proficiency testing, that's an example of, how is that impacting their customers and their business and the results that they're giving their customers, right? So it's not just, here's what you need to do to be accredited; it's, here's an example of what could be a problem that you need to look into on a larger scale. So it's not just for accreditation; it's, you know, how is this impacting your customers who are paying you for reliable data?

00:22:02 BRIAN: Great point, all great points. Yeah, annual proficiency samples should not be the only time you're paying attention to how well you're doing at your laboratory. And yeah, I get the point that some of the numbers are pretty tight on some of the samples. Some people say that it's not indicative of real-world conditions and that the expectations are lower there. But what we're talking about is AASHTO accredited labs, and your expectations should be higher than the average laboratory's. So I would challenge people to pay attention to what they're doing out there all the time and try to make continual improvements. You can reach out to us if you aren't sure where to look. And yes, Tracy, sometimes you will not know why it happened when you get a low rating, but if you got zeros on four samples in a row, it's time to take a serious look, for sure, because there is something going on there.

00:23:01 BRIAN: So thank you, guys. I hope that the laboratories participating in our program get something out of this discussion. I think it will help you if you really aren't sure where to look. And yet again, we are peeling back the curtain for more transparency into the kind of conversations we have in the office, so that people understand our perspective as well. So thanks again for your time. This has been AASHTO re:source Q&A. I'm Brian Johnson, and thanks to our guests John Malusky and Tracy Barnhart, and of course, our producer Kim Swanson.

[Theme music fades in.]   

00:23:37 ANNOUNCER: Thanks for listening to AASHTO re:source Q&A. If you'd like to be a guest or just submit a question, send us an email at podcast@aashtoresource.org or call Brian at 240-436-4820. For other news and related content, check out AASHTO re:source's Twitter feed or go to aashtoresource.org.