
AASHTO re:source Q & A Podcast
Mid-Atlantic Quality Assurance Workshop: How to Leverage AASHTO re:source Programs to Monitor and Improve Quality
Dive into our latest episode as we explore the critical need for laboratories to evolve from mere compliance to a thriving culture of continual improvement. Brian Johnson shares valuable insights from the recent Mid-Atlantic Quality Assurance Workshop, highlighting a new strategic roadmap aimed at enhancing operational practices within laboratories.
Throughout the episode, we address the pivotal role that AASHTO plays in guiding laboratories toward adherence to rigorous standards, emphasizing that compliance is just the beginning. We underscore the importance of understanding the root causes of nonconformities and of fostering a proactive mentality that seeks systemic solutions over temporary fixes.
Listeners will appreciate the actionable examples provided, including how to utilize proficiency sample data for setting realistic quality objectives. With a focus on collaboration, Brian stresses the necessity for communication among all stakeholders involved, paving the way for a unified approach to quality management.
This episode speaks directly to laboratory managers and quality assurance professionals seeking to enhance their practices. Are you ready to elevate your lab’s standards? Tune in to learn how to integrate continuous improvement into your quality framework effectively. Don’t forget to subscribe and share; let’s foster a community committed to excellence in laboratory quality!
Have questions, comments, or want to be a guest on an upcoming episode? Email podcast@aashtoresource.org.
Related information on this and other episodes can be found at aashtoresource.org.
Welcome to AASHTO Resource Q&A. We're taking time to discuss construction materials, testing and inspection with people in the know. From exploring testing problems and solutions to laboratory best practices and quality management, we're covering topics important to you. Q&A.
Speaker 2:I'm Brian Johnson. On today's episode we're going to do something a little different. I gave a presentation at the 57th Mid-Atlantic Quality Assurance Workshop, which was in Hanover, Maryland, last week, and I talked about our strategic roadmap and kind of shifting from strictly conformance to working more on continual improvement with the audience, and I felt like it was a good one, a good message to get out to the podcast audience. So I asked them for permission to broadcast it and they said, yeah, go for it. So I am presenting to you the presentation that I gave at this workshop. Now, the Mid-Atlantic Quality Assurance Workshop. Like I said, this was the 57th, so they've been doing this a long time. The Mid-Atlantic states in this group are Pennsylvania, New Jersey, Delaware, Maryland, West Virginia and Virginia, so they rotate their meetings throughout that region, and they have quite a bit of useful information for people. It appeared that there were a lot of people from the construction testing industry. There were DOT people, researchers, associations, quite a few people there. It's a three-day conference, or two-and-a-half-day conference, and it's pretty well attended. If you're in the region especially, you might want to check it out. It's going to be hosted in Hershey, Pennsylvania, next year for the 58th. So if you're interested in something like this, check it out for next year, and I believe it's usually around the same time, so mid to late February. All right, enjoy my presentation. Okay.
Speaker 2:So a lot of people don't really understand who AASHTO is. Like, they think about the staff that might come into the laboratory from AASHTO Resource as AASHTO. But AASHTO is really the state DOTs, right? So if you are a DOT, you know that: you're the AASHTO member. You're the ones who set the standards, you're the ones who kind of give us direction on what to do. And the way AASHTO is laid out, there's the Board of Directors, there's the Committee on Materials and Pavements, and then the AASHTO Resource Administrative Task Group is a subset of the Committee on Materials and Pavements. So there's a chair and four regional vice chairs, and those are the ones who make the decisions on accreditation.
Speaker 2:So, for example, we had a situation recently where a laboratory had an absolutely horrible assessment. I don't even know how many nonconformities, but it was over 100 by a good number, and in those cases what we do is we say, okay, this laboratory shouldn't be accredited, they shouldn't be getting contracts, they shouldn't be doing anything really other than fixing what their problem is. So what we do is we take the report and we send it to the ATG, the Administrative Task Group, and say we'd like to suspend this laboratory's accreditation and not give them a chance to respond, because they don't really deserve it at this point. And then we get a decision. So in this case the decision was to suspend the accreditation, and things worked the way they should have. And now action is being taken by the laboratory to get in a position to get the accreditation back, but they don't get to go through the normal process, because they have forfeited the right to do that. So those are the kinds of decisions that the ATG makes, and that can happen. Accreditation can get suspended or revoked at any time. Cases of fraud can lead to immediate revocation and potentially refusal of service. So we can just say we're not going to do business with you anymore. But that has to be approved by the ATG. And then below that you've got the staff.
Speaker 2:So, AASHTO and AASHTO Resource. Okay, so we covered it. You know, a lot of places where I get to speak, we talk just about the accreditation program and how that works. But at AASHTO Resource we provide proficiency samples and assessments that cover all those materials. And then there's another component. Obviously there are some pretty important materials not shown up there, like concrete and masonry, that are covered by CCRL. Now, CCRL and AASHTO Resource share the same building, where we work together very closely. The AASHTO Accreditation Program accredits based on CCRL assessment reports and proficiency sample reports, but CCRL is owned and operated by ASTM. So AASHTO and ASTM work very closely together in this regard, even though they don't always work as closely together on standards development.
Speaker 2:Now today, since this is the quality assurance workshop, I've geared this a little bit more towards just quality concepts than how to maintain your accreditation.
Speaker 2:So one thing we worked on last year was developing this strategic roadmap, and it's a subset of AASHTO's strategic plan, but it lays out a slightly different vision than what we've had in the past.
Speaker 2:It lays out a slightly different vision than what we've had in the past, and what we're trying to do is really get people to think more about quality and less about just resolving nonconformities and adhering to standards. So there's a lot of I think a lot of times people get lost in the minutiae and we get lost in the minutia as well. So what we'd like to do is kind of get people heading in the direction of improving quality overall instead of just being buried in details constantly. Not that we're going to completely ignore details, but we're going to focus more on quality improvements. And we also have a factor of this mission where we want more collaboration with key stakeholders than we've had in the past. So I talked a lot about what we do with the, with AASHTO members, but there are a lot of other people who use the accreditation program as well, like building departments and counties and cities and the FAA, the Corps of Engineers. There are a ton of entities that use the AASHTO accreditation program. We need to make sure we're delivering for them too.
Speaker 2:Okay, so this strategic roadmap has a bunch of objectives. I'm going to talk about one of them, and I alluded to this earlier: continual improvement. And there are a few processes associated with this continual improvement process. One is developing measurable action plans and quality objectives, meaning quality-related objectives that you've determined on your own, that you evaluate every year, with benchmarks and other goals that sit outside of your normal operations. You've established goals associated with quality. I don't know how many of you have them, and that's okay, but let's talk about what to do if you don't have them. So if you're developing measurable action plans and quality objectives, there are some things you have to think about. You have to ask yourself: how do we know we're doing well? So you have to figure out what you're going to measure. Are there any key performance indicators we can use? If we have any, what are the baselines? Where are we starting? And then, do we need any resources to get started, to collect this data and figure out how to get better? Well, lucky for you, if you're familiar with us, we have some built into the program. You can use the AASHTO Resource and CCRL assessment reports as performance indicators, and you can use the proficiency sample data as a performance indicator. The proficiency samples are a lot more straightforward, and I'll get into that. The assessment reports are not so straightforward. Is anybody using assessment reports as a performance indicator? Like, do you have goals, such as: I don't want any more than this many nonconformities during an assessment? Or: I want to improve our number of nonconformities, if you're that laboratory I mentioned before, maybe from 160 to hopefully 80 or less. Way less would be great. Has anybody used that before? You have, a little bit. Yeah, it's really tricky, and I want to explain some of the reasons. I would want to have a conversation with you if you're using the number of nonconformities, because there is some variability inherent in the assessment process, and I'm sure those of you who have been through the assessment program a few times have seen it, right? A lot of it is based on what that person saw in the few days that they were there in your laboratory. Equipment can break down. Weird things can happen. The assessors might have different experience than you expect. They might know a lot about aggregate and really not a whole lot about concrete, or a lot about asphalt and not a lot about emulsions, and that can kind of shape how that report comes out, and you have to be prepared for that. So I think there's a bit too much variability with the assessments to have really realistic quality objectives tied to the number of nonconformities. Now, I'm kind of trying to be honest with you about the variability that we have, but don't discount your own variability as well. As a laboratory, there's that too, but you would know what that is better than I would.
Speaker 2:All right, so let's talk about the assessment reports. I'm going to talk about the process a little bit. So what do people typically do after an assessment? Right, they get the assessment report, they address all the nonconformities, and they just hope and pray that we accept their corrective actions. That's not really where we want to be, right? What we want to do is get the assessment report, start thinking about those nonconformities more, analyze trends, and I will get into that, and address systemic issues. So this is that shift from just conformance to continual improvement. You have to think about those systemic issues that you're seeing. And then you do have to report your corrective actions. And here is another step a lot of people don't do: monitor the effectiveness of your corrective actions. It's not enough just to get it accepted by us. You have to think about what you've done. Are there ways you can make sure that you've kept up with it, so that the next time you get an assessment or an internal audit, that same issue doesn't come up again?
Speaker 2:Okay, so let's look at a couple of examples. These are all probably familiar nonconformities to you: competency evaluations not performed by the deadline, equipment not standardized or calibrated by the deadline, internal audit not completed by the deadline, management review not completed by the deadline or not at all, which I think is probably more common. So you see some trends there, right? What we normally get in that first example is people will say: okay, here are my competency files for my technicians, here are the equipment standardizations we didn't have, here's the internal audit, here's the management review. They never think about: okay, well, why can't we get anything done on time? What's going on? Do we not have enough staff? If you're a DOT, that's probably true, right? You don't always have the people you need to get the work done. Do we not have a system in place to keep track of these deadlines? Do we have an outdated system that just isn't working for us, and we just haven't been able to figure out a replacement yet? So there are a lot of things you need to think about with your processes instead of just continuing to put these band-aid fixes in place, because you're never going to continually improve until you address those trends.
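As a rough illustration of the kind of deadline-tracking system being described, here is a minimal Python sketch. The task names and the one-year intervals are placeholders for the example, not requirements taken from any AASHTO document.

```python
# A minimal sketch of a recurring-deadline tracker like the one
# described above. Task names and one-year intervals are illustrative
# placeholders, not requirements from any AASHTO document.
from datetime import date, timedelta

RECURRING_TASKS = {
    "competency evaluations": 365,
    "equipment standardization/calibration": 365,
    "internal audit": 365,
    "management review": 365,
}

def overdue_tasks(last_completed: dict[str, date], today: date) -> list[str]:
    """Return every task whose next due date has already passed."""
    late = []
    for task, interval_days in RECURRING_TASKS.items():
        next_due = last_completed[task] + timedelta(days=interval_days)
        if next_due < today:
            late.append(f"{task} (was due {next_due.isoformat()})")
    return late

# Example: several tasks have slipped past their annual due dates.
print(overdue_tasks(
    {
        "competency evaluations": date(2024, 1, 15),
        "equipment standardization/calibration": date(2024, 6, 1),
        "internal audit": date(2023, 11, 30),
        "management review": date(2023, 12, 15),
    },
    today=date(2025, 2, 20),
))
```

Even something this simple makes the trend visible: if the same tasks keep showing up as overdue, that is the systemic issue to fix, not the individual deadline.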
Speaker 2:Monitoring is another really challenging thing. We struggle with this as well. When we get audited, really, our internal audits are the most painful. One of the problems of being really good at auditing is you have really good internal auditors, and they just destroy you. So when we go through an internal audit, we usually get some nonconformities and a lot of really thorough follow-through by our quality manager, and it does make us better, and it's been really good. It definitely gives us the perspective of our customers, which is also important. But monitoring is hard.
Speaker 2:I just want to give you some tips for this. Set up appointments in whatever calendar you use; we use Outlook. Internal audits are the thing people really struggle with getting done on time, and you really need a few appointments for internal audits: one to prepare for it, one to conduct it, and one, with your boss included, to attend the meeting where you go over it, so they know what is going on. That gives you a chance to not just hopefully get what you need, but to let them know what's really going on in your lab. Tell them some successes, tell them some struggles you have. It'll help you get better. Conduct effective internal audits. Work on that. Think about your process for it and try to make some improvements there. Improve management reviews. We've done podcast episodes, webinars, tons of different sessions at our technical exchange on management reviews, and people still don't really understand them. I'd say a lot of DOTs understand these, but if you are new to our program, you might not know what this term is, and I would encourage you to go to our website and find more information on management reviews. Basically, it closes the loop and gets management more engaged with the quality system and what you're going through as a laboratory. And schedule updates for your policies, procedures and forms.
Speaker 2:So this is another thing. A lot of times, people won't ever look back at what they have to see if it's still up to date. They're satisfied that the nonconformities are resolved, or that they didn't get a finding on the next audit, but they don't go back and look: okay, are we still doing this? This procedure says that I have to fill out this form, and we haven't used that form in 10 years. Why does it still say that? So go back and look at those, and put them in your calendar. Maybe you review each one every three years; just put a date out there and make sure it gets reviewed and is still relevant. Make sure staff knows where it is. Make sure people are trained on it. You'd be surprised how quickly things go out of date, even if you've had a system in place for a while.
Speaker 2:Things change. Technology changes, people's understanding changes. People find shortcuts. Instead of saying that shortcut is wrong because it's not in the policy, change the policy. Put the shortcut in there. It works great. Don't be tied to these things that have been okay before. You want to be not just okay, you want to get better over time, okay.
Speaker 2:So I did mention this a little bit earlier, establishing quality objectives based on an assessment report, and I gave you a little bit of the pitfalls that could come with doing that. There are ways to do it. I would say look at improvements rather than a strict number. Years ago, I sat in closeout meetings with laboratories where the managers had a really unrealistic objective of zero nonconformities, and when they didn't get that, they were really mad at their staff, and that was not productive, and it was not useful, and it was not realistic, and it made everybody just not care about anything. They were just mad at each other, and it made it hard for them to get better. So think about reasonable quality objectives. If you ever want to talk about that with us, we're happy to have a meeting, look at what you're dealing with now, and come up with some goals for next time.
Speaker 2:Make sure you use us and it's not just the DOTs, but especially the DOTs, since we really do work for you. Take advantage of us more. I think a lot of people don't pick up the phone or send an email and say, hey, what do you think about this? We're happy to help with that. We have over 2,000 labs in the program. We've seen it all and we can kind of give you some ideas about best practices. But you know that's not just for the DOTs, that's for everybody.
Speaker 2:So I would be leery about using the number of assessments or number of findings. But okay, let's get into proficiency sample data. Way more objective, way more straightforward. This is an example of a proficiency sample report; this one is on the hot mix asphalt side. One thing I want to point out, one thing that makes our proficiency sample data a reliable, objective source of data, is the number of participants we've got. So if you look at that, line number three... well, let's look at line number two. That's the maximum specific gravity, or Rice: 886 data points are involved in the analysis.
Speaker 2:We use the averages and standard deviations to determine the ratings. When it comes to statistics, the number of data points really tells you how reliable that data is going to be. So if you've ever looked at a research report at TRB, for example, or some other research program, where they have six data points, I'd be very curious to see whether that is reliable data, and whether whatever conclusion they reach is going to actually be a conclusion you see in real life. But when you see 800-some data points, that gives you a better idea. Like, hey, this is probably really happening. So what we do is provide you the average and the 1s and %1s, so you get your standard deviations, and the ratings are issued based on your standard deviations, and I'll show you that in a second as well.
Speaker 2:So, ratings. People get confused about this too. This is pretty basic, so I'll move quickly. The data basically falls out on the bell curve, and when you're within one standard deviation, you'll get a rating of 5. If your result is above the average, it's a positive number; if it's below the average, it's a negative number. But a 5 and a negative 5 are still great. Then, after that first standard deviation, it goes to half standard deviations. So a 4 or a negative 4 is within one and a half, and a 3 or a negative 3 is within two. Once you get beyond that, you're kind of in the risk area, so that is considered a low rating.
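To make those bands concrete, here is a short Python sketch of the scheme as just described: within one standard deviation is a 5, the bands then step down in half-standard-deviation increments, a 0 sits beyond three standard deviations, and the sign comes from whether the result is above or below the average. The exact boundary handling here is an assumption for illustration; the official ratings are calculated by AASHTO re:source.

```python
# Sketch of the rating bands as described in the talk. Boundary
# handling (<= vs <) is an assumption; the official calculation
# belongs to AASHTO re:source.
from statistics import mean, stdev

def psp_rating(result: float, all_results: list[float]) -> int:
    """Signed rating: magnitude from the standard-deviation band,
    sign from whether the result is above or below the average."""
    avg = mean(all_results)      # grand average of all participants
    s = stdev(all_results)       # the "1s" shown on the report
    z = (result - avg) / s
    bands = [(1.0, 5), (1.5, 4), (2.0, 3), (2.5, 2), (3.0, 1)]
    magnitude = 0                # beyond three standard deviations
    for limit, rating in bands:
        if abs(z) <= limit:
            magnitude = rating
            break
    return magnitude if z >= 0 else -magnitude
```

So a 5 and a negative 5 both mean the result landed within one standard deviation of the average; the low ratings in the risk area are where corrective action comes in.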
Speaker 2:As far as the AASHTO Accreditation Program and the proficiency sample program are concerned, what we do with that is we say, you know, this is where you need to take corrective action. However, we don't suspend accreditation based on a 2 and a negative 2. We give laboratories an opportunity to correct the issue at that point. But when you get out toward three standard deviations, with a 1, a negative 1 or a 0, you're really far from the average. Even then, it's kind of hard to get into suspension, because you have to do this on both samples of the pair in two consecutive rounds. So that's four times that you would get a zero, a one or a negative one, and that shows us that you're not making improvements. You're not doing anything to improve.
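For what "both samples of the pair, two consecutive rounds" means in practice, here is a tiny hypothetical check, assuming a low rating means a 0, 1 or negative 1 as described in the talk. It is not AASHTO re:source's actual implementation.

```python
# Hypothetical check of the suspension trigger as described: ratings
# of 0, 1, or -1 on both samples of the pair, in two consecutive
# rounds. Not AASHTO re:source's actual implementation.
LOW_RATINGS = {-1, 0, 1}

def suspension_triggered(rounds: list[tuple[int, int]]) -> bool:
    """rounds holds (sample A rating, sample B rating) per round,
    oldest first; True once two consecutive rounds are all low."""
    for prev, curr in zip(rounds, rounds[1:]):
        if all(r in LOW_RATINGS for r in prev + curr):
            return True
    return False

print(suspension_triggered([(3, 2), (1, 0), (-1, 1)]))   # True
print(suspension_triggered([(1, 0), (3, -1), (0, 1)]))   # False
```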
Speaker 2:Some people, even after they get suspended, will order a blind sample and fail again, and we say, what are you doing? And they say, well, we thought we'd get satisfactory ratings this time. Why? Why did you think you were going to get satisfactory ratings? You've done nothing to improve. So you really have to think about: why am I getting these low ratings? That's another thing: we can't know what happened at your laboratory for you to get the low ratings. We can give you an idea of things to think about if you're really not sure, because we have seen a lot of things, but only you can know what happened. Here's an example of a laboratory that got a negative one and a negative two. So those are not great ratings. It's none of you, by the way.
Speaker 2:I pulled this from somebody far away, so I made sure we didn't get anybody in this room in trouble. So we provide that data, and we also provide these Youden diagrams, which show you where your laboratory falls on the scale. So that's all of the data, and that red dot is that laboratory. You see, they're kind of on the outside. If you're beyond that oval, you're in the zero range, and that's not where you want to be. But you see a lot of data points beyond there, so there are some strange things going on out there.
Speaker 2:Another thing, and I don't think people look at this enough, is the performance charts. These will tell you how you're doing over time, and what you really want to see is a trend line that hangs around that red line, the zero line. That red line is the perfect average, the round data point is the first, or odd, sample, and the square one is the even sample. We always send a sample pair, so you can see how repeatable your results are in the spread between the circle and the square on each data point, and then you can see how you're doing from one year to the next by watching that trend line. So this is an example of a really inconsistent testing lab. They're very consistent in that their odd and even samples are pretty close together, except for that one, but they're all over the place when it comes to the averages.
Speaker 2:So something's going on. There's some erratic behavior going on here, and it's probably a people problem, you know, somebody not following the standard. People usually want to blame the equipment, but when you see an equipment problem, you see a line that drifts over time slowly, either drifting down or drifting up. When you see something like this, it's usually somebody mishandling samples, not following procedures, not being trained. Those are the kinds of things you want to look at. But these are things that you can use to set performance goals for your laboratory in an objective way. So, like I said, this is a way more straightforward method of setting quality objectives. We give you the ratings; you can use those as your key performance indicators. We already know what the baseline is, so you don't have to guess. You want to get a three or better, but then you want to think: is that good enough for you?
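If you wanted to put numbers to that drift-versus-scatter distinction from your own performance-chart history, here is one rough way to do it, as a sketch. The input is each round's deviation from the grand average in standard-deviation units, and the thresholds are invented for the example rather than taken from any AASHTO guidance.

```python
# Rough sketch: separate slow drift (equipment) from erratic scatter
# (handling/procedure) in a performance-chart history. Inputs are each
# round's deviation from the grand average in standard-deviation
# units; the 0.1 and 0.5 thresholds are invented for this example.
def diagnose_history(round_deviations: list[float]) -> str:
    n = len(round_deviations)
    xs = range(n)
    x_bar = (n - 1) / 2
    y_bar = sum(round_deviations) / n
    # Least-squares slope: a steady equipment drift shows up here.
    slope = sum((x - x_bar) * (y - y_bar)
                for x, y in zip(xs, round_deviations))
    slope /= sum((x - x_bar) ** 2 for x in xs)
    # RMS scatter around the trend line: erratic results show up here.
    residuals = [y - (y_bar + slope * (x - x_bar))
                 for x, y in zip(xs, round_deviations)]
    scatter = (sum(r * r for r in residuals) / n) ** 0.5
    if scatter >= 0.5:
        return "erratic: check sample handling, procedures, training"
    if abs(slope) >= 0.1:
        return "drifting: check equipment and calibration"
    return "stable: hold the line"

print(diagnose_history([0.2, -1.5, 0.8, -2.0, 1.1]))  # erratic
print(diagnose_history([0.1, 0.4, 0.8, 1.1, 1.5]))    # drifting
```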
Speaker 2:Let's say you're getting threes and you say, well, that's good, we're in compliance. But is that where you want to be as a laboratory? Do you want to be a slightly-better-than-average laboratory? Maybe you do. I'd say most people don't. Nobody gets out of bed in the morning and says, I want to be just slightly better than average today. Right? Maybe you do. I hope not. It'd be hard to get out of bed like that.
Speaker 2:So let's talk about some objectives you can actually follow up on here. Think about these things: What should your goals be? When do you want to reach them? How do you want to get there? All right, let's use that performance chart. If I'm in that laboratory and I'm asking myself those questions, I say: okay, obviously I did really well the previous time, and this time I did pretty badly, so I want to think about it before I get my next sample. Let's identify at least one improvement opportunity that I can make in my laboratory before those samples come in the door. So I can set that goal and sit down with my team and say: what can we do, guys? What improvement can we make before we test this next one? I should definitely be auditing my technician on sample handling.
Speaker 2:These are things people don't always think about: sample handling, not just testing. What happens when that box comes in the door? Where does it go? How does it get logged? How do I make sure it gets tested on time? What do they do to get it out of the box? Where do they store it? Do they split it? What's the process, the one that's not written down somewhere, that they might handle it with? And then they probably also have something they use for the calculations and the reporting. Let's go over all that stuff to make sure we're doing things properly.
Speaker 2:And if I'm in this laboratory, I'm probably not going to shoot for the moon here, so I'll probably go for a four. I might say, listen, we're all over the place; we just need to stabilize. So maybe a three or a four is okay this time, but let's just do better, right? Continual improvement. And then, after that... well, actually, this laboratory has done pretty well with repeatability. I won't go all the way back, but we give a repeatability rating too. It kind of tells you how far apart the two samples were. In this case the laboratory was at a 5RD, so I would expect to continue that trend. And then I would also want to review the ratings with my technicians. I might not have done that before, but this time I want to do it. I want to close the loop on this. So these are just some simple things that everybody can do to improve, figure out some objectives, and follow through on them.
Speaker 2:So then, for monitoring, I want to think forward. Let's think ahead to 2026. Okay, if we achieved our goal of a four, now it's time to go for the five. So let's see if we can improve on that. And then, once we go over all these steps again, we can review it with the technicians and with management, put it in our internal audit, put it in the management review, tell everybody what a great job we're doing, and then we feel good about that, right? So, I just briefly touched on the roadmap that we came up with on this topic.
Speaker 2:There is a lot more in there. It's a one-pager, but there's a lot of good stuff in it, so if you're curious about what's on there, it's on our website, and there's a QR code here if anybody wants to know more. And then I also want to tell you that we have the AASHTO Resource Technical Exchange coming up, very far from here, in Bellevue, Washington, March 17th to the 20th. Has anybody in here been to the Technical Exchange? Yeah, I know a couple of people here have been.
Speaker 2:This is a great opportunity to meet with other people from all around the country who work in our industry: lab managers, lab technicians, equipment manufacturers, LIMS people, I don't know what you call them, like IT systems developers, software engineers, people who work just in the materials testing industry. We usually get about 250 people showing up to this, and there's a lot of time to communicate with them in between breaks, at different events, and in the evenings. We have a lot of panel discussions, workshop-type activities, and it's a lot of fun. It's probably not too dissimilar from what you guys do here, except that we usually get people from all over the place instead of it being regional. But I think it's worked out really well. We've done it quite a few years, and we're expecting another good show in Washington State.
Speaker 2:I also want to tell you about the podcast. I do host this podcast. We just wrapped up our fifth season, and we've got over 100 episodes. We cover all kinds of different topics related to what we do at AASHTO Resource and CCRL. If you're ever curious and looking for extra information, that's a good source too, and if you have questions, you can reach out to me at any time. But that's all I have. Does anybody have any questions? Yes?
Speaker 3:I was curious. You talked about the proficiency sample program and how you've got a good sample size with the number of participants in the program. I was wondering, do you guys ever look, year to year, at what the standard deviation is doing on an absolute basis, and how it's trending? Like, do you see any trends that labs nationally are getting tighter in certain tests, or getting wider? Do you know what I mean?
Speaker 2:Yeah, I would say over time it has gotten tighter in a lot of tests, actually. You can kind of see that if you look at the standards in the Committee on Materials and Pavements, because a lot of times they'll use the proficiency sample data to establish the precision and bias statements. So you'll see an old standard might have a wider range of acceptance than a newer standard, and some of those are through an interlaboratory study, but some of them are just using proficiency samples. But it's tough. We do get a question related to that quite a bit: why don't you just use the precision and bias statement instead of the average? It's like, well, if that's an old P&B statement based on data that was maybe unknown, like sometimes we don't know where the data came from, do you really want to use that for your quality objectives? That might be fine for material acceptance, but when we're talking about making quality improvements, that's a different topic. But yeah, to your question, I think it is getting tighter. Yes?
Speaker 4:I have a question about the proficiency samples. We receive two samples for each type of test, so are those samples identical? Based on that, can you find repeatability within the lab data? If yes, then how do we find the standard deviations for the lab repeatability data? Do we include all of the sample A and sample B information together and then come up with that?
Speaker 2:Yes, and actually the equation is on our website if you want to determine it, just for your own reassurance. Well, I guess, to answer your first question, they're not always the same, so sample A and sample B might be slightly different. But even though they are, it's a comparison of your results to the standard deviation, and then a comparison of those differences. That's how you figure out that within-lab repeatability. The specific equation is on the website, so you can figure out exactly what it is if you're curious. Even though there's a difference between the samples, it's factored into the equation. So I guess the short answer to that is yes.
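For the curious, here is a simplified sketch of that idea, assuming the comparison works roughly as described. It is not the actual equation, which is the one published on the AASHTO re:source website; the point is just how subtracting the average pair difference accounts for samples A and B not being identical.

```python
# Simplified sketch of the within-lab repeatability idea: compare your
# A-minus-B difference to the spread of everyone's A-minus-B
# differences. Not the official AASHTO re:source equation (that one is
# on their website); subtracting mean(diffs) is what absorbs any real
# difference between samples A and B.
from statistics import mean, stdev

def repeatability_z(my_a: float, my_b: float,
                    all_a: list[float], all_b: list[float]) -> float:
    """How unusual this lab's pair difference is, in standard
    deviations, relative to all participating labs."""
    diffs = [a - b for a, b in zip(all_a, all_b)]
    return ((my_a - my_b) - mean(diffs)) / stdev(diffs)
```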
Speaker 5:So on our last couple of assessments, we have received observations on our customer satisfaction surveys. Based off the first observation, we did a bunch of things and still didn't get a whole lot of responses back for customer satisfaction. Do you guys have anything on your website, or any guidance, on how to get more people to respond to a customer satisfaction survey?
Speaker 2:Okay, let me ask you this. When you're talking about the customer satisfaction survey, can you tell me, who are those going to?
Speaker 5:So the first time, when we got an observation, we tried to improve it. So basically we did like an email blast to everybody that was in our e-cams anyway. So every producer in there with an email address, they all got an email.
Speaker 5:I forget what the response rate was, but it was very low, I believe. We also put a link at the bottom of the signature in our emails so that anybody can do it. Again, the response rate is very low. So every time you guys come in and do an assessment, it always comes up as an area for improvement, but we're having a hard time figuring out how to get people to send back some responses.
Speaker 2:So you're trying to collect feedback to improve your processes, but you're soliciting the producers.
Speaker 5:Our customers.
Speaker 2:Your customers, and you're not getting customer feedback. So you're trying to collect it and make improvements on your own, right, but you're not getting feedback. Does that indicate to you that they're satisfied?
Speaker 5:Yeah, I mean, that's my response when they come in and do the assessment: they get the survey, they're satisfied, and they just ignore it. You know, even when I go to the garage to get my car repaired, ten minutes later I get a survey. Do I do it? Sometimes I do, sometimes I don't. I think it's just a big issue in general that people don't want to do these surveys.
Speaker 2:Yeah, I think you're right. I think, as a country, the whole population is suffering from fatigue from all these surveys, and from making you register yourself. Like, the Radio Shack methodology of collecting all your personal information just to sell you batteries has somehow won, and now everybody's doing that, and we just get exhausted by it. So you're probably right: your customers are probably fatigued. I know we wear people out with surveys. We send so many surveys.
Speaker 2:What do you think about this? What do you think about that? And we do get some responses. Actually, we get a decent response rate, but it is kind of difficult. You don't want to overreact to some of it, because you don't get those average people who are just satisfied; they're not going to respond. You get the people who are super excited, because it probably went better than it should have for them, and you get the people who are really mad because something didn't go their way. So the way we deal with it is we get that data and we treat it, and I don't think we're doing it the right way, by the way, like a complaint.
Speaker 2:But I'd say it's not a complaint unless it's unsolicited. Because a lot of times, when we follow up on the people who send negative feedback, I'll say, oh, you know, I looked into this, and their response is usually: oh, I didn't know you were going to do that. I didn't mean to cause any trouble. I wouldn't have said anything if you weren't pestering me to give this feedback. And I'd say, okay, well, I feel good that we made an improvement either way, but I also feel like I wasted a lot of time worrying about this customer who wasn't actually upset. So, to answer your question, you're doing what you can do.
Speaker 2:I would say try different methodologies, but if you're just not getting anything, you're doing your best to reach out to them and get feedback. You might want to try some different things, maybe less frequent cold calls or something, and say, hey, I just wanted to follow up with you on this after a project. That's one thing that we found to be a better way to get feedback: targeting an event. So after somebody just gets through an accreditation process, we'll send them a solicitation for feedback. That has been way more successful than doing it based on a calendar, like sending it out every January.
Speaker 2:We had horrible results with that. But if it's right after something, they usually are more inclined to do it. When I say more inclined, though, we may be going from like two percent to 25 percent on a survey, and I feel like that's pretty good. So if you're getting anywhere near that, I'd say that's all right. I don't know, does anybody else have any input? That's kind of a tough one. Kathy, are you... so Kathy's going to go up. Am I eating into your time? Okay, I think we have a few more minutes. Any other questions? Yes?
Speaker 4:You mentioned the lab that had over 100 nonconformities. Since you don't actually accredit with, like, a blanket accreditation, for how many different tests did you pull their accreditation?
Speaker 2:I'm guessing it was probably all of them. It was all of them, yeah. How many tests was that? I don't know, maybe like 60 tests, something like that. You know, the typical commercial testing lab is doing soil, aggregate, concrete, masonry, asphalt mixtures, and it may be like six to ten tests in each scope, and then some quality management system standards. But that particular laboratory, it was just like they weren't doing anything in between assessments.
Speaker 2:And when we see that... when we suspend the entire accreditation, it's usually because there's an underlying problem, something that impacts the conformance to AASHTO R18. AASHTO R18 is kind of the bedrock of the whole program, and if they're out of conformance with that, it can affect the entire accreditation. They may be doing a great job with their concrete cylinders. That's usually what they're not doing a great job with, by the way. But they could be doing a great job with that, and they'd still be suspended if they weren't keeping up with all their quality management system obligations. Thank you. You're welcome. All right, well, thank you for your time and attention, and for hanging in there.
Speaker 1:Thank you. Thanks for listening to AASHTO Resource Q&A. If you'd like to be a guest or just submit a question, send us an email at podcast@aashtoresource.org, or call Brian at 240-436-4820. For other news and related content, check out AASHTO Resource's social media accounts or go to aashtoresource.org.