AASHTO re:source Q & A Podcast

Can AI and Automation Reinvent Testing?

AASHTO re:source Season 6 Episode 2

The digital revolution has reached the world of construction materials testing, and it's happening faster than many of us realize. In this eye-opening conversation with Mike Copeland, Quality Program Manager at the Idaho Department of Transportation, we explore the remarkable ways artificial intelligence is transforming how state DOTs handle testing data, quality assurance, and technical decision-making.

Mike shares his journey from struggling with data trapped in PDFs to developing sophisticated AI tools that now save his agency countless hours of manual work. We witness firsthand demonstrations of AI applications that extract testing data in seconds instead of hours, plot complex gyratory compactor data with simple drag-and-drop functionality, and even predict material properties with surprising accuracy. But this isn't just about efficiency—it's about reimagining what's possible.

Perhaps most valuable is our frank discussion about the double-edged nature of these powerful technologies. While AI offers unprecedented capabilities to streamline workflows and enhance decision-making, it also creates new vulnerabilities in our quality assurance systems. Mike explains how traditional approaches to sample custody and verification testing may need fundamental reconsideration as we enter an era where data itself requires security and verification.

Throughout our conversation, practical examples bring these concepts to life: an AI chatbot that instantly answers technical questions about specifications while identifying conflicts between manuals; tools that transform handwritten test sheets into structured data without error-prone manual entry; and exploratory models that challenge our assumptions about which physical tests are truly necessary.

Whether you're already experimenting with AI or just beginning to consider its implications for materials testing, this episode provides both inspiration and caution from someone at the leading edge of this technological transformation. Join us to discover how these tools might reshape your own testing program while maintaining the integrity that ensures public safety in our infrastructure.

Send us a text

Have questions, comments, or want to be a guest on an upcoming episode? Email podcast@aashtoresource.org.

Related information on this and other episodes can be found at aashtoresource.org.

Kim Swanson:

Welcome to AASHTO Resource Q&A. We're taking time to discuss construction materials, testing and inspection with people in the know. From exploring testing problems and solutions to laboratory best practices and quality management, we're covering topics important to you.

Brian Johnson:

Welcome to AASHTO Resource Q&A. I'm Brian Johnson.

Kim Swanson:

And I'm Kim Swanson, and today we have Mike Copeland from the Idaho DOT with us. Welcome, Mike!

Mike Copeland:

Thanks for having me. This is going to be fun.

Brian Johnson:

The topic that we're talking about today is the use of AI as it pertains to the industry that we're in, which is construction materials testing. Mike has gotten involved with AI in his role, and I'm not going to define Mike by his position title, because it's a very unusual thing for somebody in construction materials to also be involved with. So, Mike, can you tell us what your title is and what kind of work you do at the Idaho DOT?

Mike Copeland:

Yeah, so I'm the Quality Program Manager, Construction Materials Group, out of our headquarters office. I deal with anything quality assurance related on the construction materials side of things, and then I dabble with AI a little bit and just try to apply it to anything quality assurance or asphalt pavements related.

Brian Johnson:

How did you start getting involved with AI in this capacity?

Mike Copeland:

So, kind of like everyone else, ChatGPT came out and sounded really cool, so I played around with it a little bit. But before that I had gotten a little more involved with data science type stuff, a lot of trying to analyze our pavement data and our construction data and trying to get the data out of PDFs and things like that. Because, you know, we have a paper-based system, but we scan all of our documents, so we're an electronic system. We don't have a LIMS system or anything, so all of our data is kind of locked down in PDFs.

Mike Copeland:

Trying different techniques to get that data out, instead of just sitting there 10-keying in numbers, got me involved with using R or Python a little bit, trying to find various methods. And then generative AI came out, so I tried ChatGPT and found it really useful for a lot of different things. And then I found it was also really useful once the vision models came out, the ones that are able to look at photos or videos and things like that. It's able to pull information out of PDFs in structured ways. It's kind of like OCR, but it keeps the structure better than OCR; it's structuring your test reports into something you can do analysis on.

Kim Swanson:

So for those who may not be familiar with OCR (because I totally know what that is), what does that stand for?

Mike Copeland:

It stands for optical character recognition.

Kim Swanson:

Oh yeah, of course, that's obviously what it's there for.

Mike Copeland:

That makes sense.

Brian Johnson:

Makes sense. Typically, when people move from one technology to another, you know, you go from paper to PDF, and then to some sort of digital data management. I guess the normal thought process would be: okay, I have this thing that can read the PDFs, so I'm going to have it transform everything into data storage in a database. But now with AI, you don't really have to do that anymore, though maybe you do it anyway just for the sake of having that data. Did you go through that, or do you just use AI to read what's in the PDFs and not worry about that transition?

Mike Copeland:

Anymore, I just use AI to read the PDFs. Sometimes, you know, if I want to analyze multiple years of test data, I'll cycle through PDFs.

Mike Copeland:

So it does one at a time, kind of like a big batch of PDFs, and then puts it into a data table or a CSV document or something, and then I feed that into AI. But if it's just, oh, a week's worth of production data for asphalt pavement, so you've got all your test reports, your hot plant printouts, you know, your 15-minute recordation, all your bills of lading, all your daily work reports, you can kind of just drag and drop it all into AI and start asking questions, and it can look into things. I mean, when I say AI, I mean generative AI, large language models, ChatGPT, Gemini type things. They'll sometimes lie to you or make mistakes, do what's called hallucinate, so you've got to fact check things, but generally it does a pretty good job.
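The batch workflow Mike describes, cycling through PDFs one at a time and collecting the extracted fields into a single CSV, might look roughly like the sketch below. The extract_fields function and its field names are hypothetical stand-ins for a real vision-model API call; only the batching and CSV pattern is shown here.

```python
import csv
import io

# Hypothetical stand-in for a vision-model call. In practice this would send
# each scanned PDF to a multimodal LLM and ask for the test fields back as
# structured JSON; the returned values below are illustrative only.
def extract_fields(pdf_name):
    return {"report": pdf_name, "gmb": 2.441, "gmm": 2.552}

def batch_to_csv(pdf_names, out):
    """Cycle through PDFs one at a time and collect results into one CSV."""
    writer = csv.DictWriter(out, fieldnames=["report", "gmb", "gmm"])
    writer.writeheader()
    for name in pdf_names:
        writer.writerow(extract_fields(name))

buf = io.StringIO()
batch_to_csv(["week1_lot3.pdf", "week1_lot4.pdf"], buf)
print(buf.getvalue())
```

The resulting CSV is then what gets fed back into an LLM (or any other analysis tool) for questions across the whole batch.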

Brian Johnson:

So, in a situation like the one you just described, what would be a typical question you might ask?

Mike Copeland:

Oh, it depends. If I want to extract out all of our, let's keep talking asphalt pavement, test data... So we have source document requirements where everything has to be hand recorded onto a source document as the original source of record, and I would ask it to extract out all that data and perform the volumetric calculations or something like that, and it can definitely do that. Sometimes you have to test out your prompts. Your prompts are the instructions that you're giving it. You have to test them out and try multiple times sometimes. But especially in the last six months, because this keeps evolving, you can give it some pretty basic instructions and pretty much any large language model will get it right on the first go for that kind of stuff.

Brian Johnson:

You're largely using this for extraction of data and information from test reports. How else are you using this in testing?

Mike Copeland:

So I'm also, like, stress testing, or what I like to call red teaming, our quality assurance specifications, looking for weaknesses, and doing the same thing with test methods. Or, you know, using it to clarify things, exploring ideas, one-off proofs of concept, things like that too. All kinds of different stuff. I've been testing it out a little bit with old dispute resolution claims, feeding in all the data to see, okay, how would AI respond. A lot of times it's pretty close to what a dispute review board does, so it's pretty interesting.

Mike Copeland:

I think, and I've talked to a lot of other DOTs about AI in the last six months, it sounds like it's a full range. Some people have never tried it. Some people have maybe tried it once when it was first released and thought, eh, this isn't anything special. And then there's a few that I think are probably using it all the time and maybe not talking about it. But I think we as an industry need to talk about it more, because there's so much potential here with how we can apply it to our everyday work. I mean, it saves so much time. I don't know that there's a whole lot of adoption.

Kim Swanson:

Where do you think it would be easiest for other DOTs and other people in your position to start? Like, what's the gateway to adopting this type of technology in testing and quality and things like that?

Mike Copeland:

Yeah, it's funny that you ask that. I was actually just thinking about this yesterday, because I've been trying to get some of my co-workers using it more to help them. I got to thinking about it. I mean, we've all worked in this industry for a while; we're probably all a little familiar with contract administration, especially those of us with DOTs. And I was thinking yesterday that that's kind of the way you'd use an LLM. Well, first off, you've just got to go use it and mess around with it. But you interact with it like you would interact with a contractor, that whole trust-but-verify thing. And then think of your prompting as the specification: you're writing the rules and then you're guiding the output. So you have to know the subject. At least that's what I find; I kind of have to know the subject to really successfully use AI, otherwise it's going to hallucinate and make things up and you aren't going to know. But if you know the subject, it can really speed things up for you.

Brian Johnson:

Yeah. One of the things that people have been kicking around lately is using AI as, I don't know if the word replacement is right, but a tool for replacing the physical testing of materials or specimens. What would you say about that, Mike?

Mike Copeland:

So the other day, just in an afternoon, I was messing around with AI, trying to ingest some gyratory data, and I was trying to see if there's a way to take some of the subjectivity out of Gmb testing, the SSD part of Gmb testing for asphalt pucks. I had AI help me write a Python script that did a multilinear regression, where I selected the variables from all the other testing that I thought might have an impact, so like T 308, T 166, Gmm testing, gradation, things like that, to see if I could come close to predicting the bulk specific gravity without doing bulk specific gravity testing. I used AI to write a script to scrape all of our central lab data for the last five years, and then, same thing with AI, used it to pull out all of our gyratory data. So I had this big data set, all from one laboratory, multiple gyratories though, and was able to build a model that could predict within the d2s precision.

Mike Copeland:

So that would have been within lab, but then I started comparing it to other labs, and you could see the different gyratories. You know, these aren't companion samples or anything like that, but you could see how the gyratories compact differently. You'd see the differences between the models. But it was all within, I don't remember the number off the top of my head, but around 99% confidence that the measured Gmb would be within 0.007, if I remember right, of the predicted Gmb, without doing Gmb testing. So it seems pretty promising. We started kind of using that as a red flag diagnostic tool. But I want to look at it a little more and start comparing more data and maybe even consider: do we need to do bulk testing?
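The multilinear-regression idea Mike describes could be sketched as follows. The predictor variables, coefficients, and data here are entirely synthetic and illustrative, not Idaho DOT's actual model; the sketch only shows the fit-then-check-tolerance pattern.

```python
import numpy as np

# Synthetic stand-in data: predict bulk specific gravity (Gmb) from other
# routine results such as Gmm and binder content. A real model would be
# fit on years of lab data, not generated numbers.
rng = np.random.default_rng(0)
n = 200
gmm = rng.uniform(2.45, 2.60, n)   # theoretical maximum specific gravity
pb = rng.uniform(4.5, 6.0, n)      # binder content, percent
gmb = 0.93 * gmm + 0.004 * pb + rng.normal(0, 0.003, n)  # synthetic "truth"

# Fit Gmb = b0 + b1*Gmm + b2*Pb by ordinary least squares.
X = np.column_stack([np.ones(n), gmm, pb])
coef, *_ = np.linalg.lstsq(X, gmb, rcond=None)
pred = X @ coef

# How often does the prediction land within a tolerance like the 0.007
# mentioned in the conversation?
within = np.mean(np.abs(pred - gmb) <= 0.007)
print(round(within, 3))
```

With real data, the interesting question is whether that "within" fraction holds up across different gyratories and labs, which is exactly the comparison Mike describes.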

Brian Johnson:

Do we need to do bulk testing? This is what I want to get to, because, as a DOT materials testing lab, you're doing QA on projects, but you've just uncovered how easy it could be for somebody, I guess on QC or QA, to just make up numbers, right? They could just make up numbers that are plausible, which creates some risk, because it's not actually tested and reflective of the material that the DOT is paying for. With that knowledge, I mean, it's good. You've done the digging, you know how it works now, and so you're in a good position to explain the concerns and the risks of somebody doing that. So with that, what do you do with this information you have now?

Mike Copeland:

I've been exploring risks a lot, and I'm of the opinion right now that our whole quality assurance system is out of date. It's got weaknesses. It's all built around a paper system that we've adapted as we moved into the digital age. But now we have CSV files, or we have equipment output files in an ASTM format, or we've got standards, and we've even got some encryption. And playing around with AI, I've found that you can sidestep pretty much any of those security features and game pretty much any quality assurance practice that deals with data. AI is not inherently good, it's not inherently bad, but if you give it technical instructions, it'll give you the output. If you prompt AI right, you can modify test results without changing any of the metadata, things like that. It's kind of scary. So I think we need to rethink our whole quality assurance practice now that we're dealing with AI.

Brian Johnson:

Yeah, I kind of wonder about that with the proficiency samples as well, because, I mean, what if somebody just imported all of the rounds that we have available and said, what answers would give me a satisfactory rating for all of these?

Brian Johnson:

I am sure that there are some numbers that would probably work. So it's going to be harder for us to tell if people are doing that. But the one thing I would caution people about is that one of the reasons we have the proficiency samples is that they eliminate the need for more on-site assessments; they're like a check in between. So if you want to have us go to annual on-site assessments, then go ahead and cheat, because that's probably where it's going to go: more on-site assessments, because then we can't rely on the results from the checks done through the proficiency samples. So things are going to change as people use, or misuse, these tools. Other systems are going to have to adapt to account for that, and if we're not getting the quality that we're looking for, things are going to change. So, Mike, when you've been thinking about all this risk, what are some things you're thinking Idaho DOT might have to do to account for some of this?

Mike Copeland:

As we move into this world of AI, I've been trying to come up with some of those answers too. In the past with quality assurance, at least in Idaho, and I think at most other DOTs, we've always focused on chain of custody. Chain of custody is a big thing, material sample security, things like that. And I think we need to consider data security too. Do we have chain of custody on this data, from the source to right now? And if we don't, then we shouldn't be trusting the data. You know, stealing information is a really big risk in cybersecurity, but data poisoning is another big risk. It was, I guess not really that recently, identified as a big risk and a growing one.

Mike Copeland:

And I think that holds true for quality assurance in our industry as well: did the data change? Were those results adjusted, or anything like that? And there are ways to identify it. I mean, you start looking for patterns, or you increase your independent verification. There are definitely ways we can use AI to help prevent potential fraud with AI. But it's changing so rapidly, every week there are new AI models released or new updates happening. I think it's like a continuous improvement process. We've got to be on our toes and be kind of agile in the way that we're approaching this.
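One simple building block for the "chain of custody on data" idea discussed here is a cryptographic hash: record a hash of each result file when it is first produced, and re-hash later to detect any modification. The file contents below are hypothetical; this is a minimal sketch of the concept, not a complete custody system.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 fingerprint of a result file's bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"Gmb,2.441\nGmm,2.552\n"
recorded_hash = sha256_of(original)       # stored at the time of testing

tampered = b"Gmb,2.451\nGmm,2.552\n"      # one digit changed later
print(sha256_of(original) == recorded_hash)   # True
print(sha256_of(tampered) == recorded_hash)   # False
```

A real workflow would also need the recorded hashes themselves to be protected (e.g., signed or stored separately), since an attacker who can edit the file can otherwise edit the hash too.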

Brian Johnson:

It's a great thing too. Like, you're talking about rebuilding your quality program. I mean, how nice is it, when I think about our accreditation program, if you're a new lab coming in, you could use AI to help you write your policies and procedures, and there's really nothing wrong with that. You can eliminate some simple errors and get something that's, you know, 80% there, and then all you have to do is go in and customize it to make it yours, and there's nothing wrong with that, right?

Brian Johnson:

But if you don't do that extra 20%, then you're probably going to have some problems, because things aren't going to make any sense. But it can get you there. And of course, I could see a situation where we eventually use it to help with audits too, right? That could save a lot of time, be more efficient, and perhaps improve standardization, you know, eliminate some of the subjectivity in audits. So there are a lot of good things that can come out of this on all fronts, right? We just have to figure out how we can use it. Were there any other ways you were thinking of using AI as a tool at the DOT?

Mike Copeland:

I built a tool, and then wrote a report on it, and the whole thing took me like five hours. This was a couple of weeks ago, just an afternoon; I timed myself to see how long it was going to take. So we have our asphalt testing source document sheets with all the handwritten data on them, and there's a lot of testing information on there that goes into an Excel spreadsheet. I made a drag and drop tool: you snap an image of the handwritten dirty sheet, upload it into this app, and the app fills out the Excel file and saves it for you automatically. So you don't have to do data entry.

Mike Copeland:

And then you go through and make sure all the numbers transitioned right, but 95% of the time it's right, or it's really obvious when it's not right. So instead of transposing numbers, AI doesn't transpose numbers. It might drop a decimal or it might just skip over a field, but it's not going to transpose a number. So the errors are easier to catch than the typical data entry errors you normally see. And it saves so much time. I mean, it takes 10 seconds to fill out a form from a source document instead of, whatever, 15 to 30 minutes.
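The "make sure all the numbers transitioned right" step could be partly automated with cheap sanity checks, so a dropped decimal or a skipped field is flagged immediately. The field names and plausible ranges below are illustrative assumptions, not Idaho DOT's actual rules.

```python
def sanity_check(row):
    """Flag likely extraction errors in one spreadsheet row (hypothetical fields)."""
    problems = []
    for field in ("gmb", "gmm"):
        if row.get(field) is None:
            problems.append(f"{field}: missing")   # model skipped the field
    gmb, gmm = row.get("gmb"), row.get("gmm")
    if gmb is not None and not (2.0 <= gmb <= 2.8):
        problems.append(f"gmb out of range: {gmb}")  # e.g., dropped decimal
    if gmb is not None and gmm is not None and gmb >= gmm:
        problems.append("Gmb should be less than Gmm")
    return problems

print(sanity_check({"gmb": 2.441, "gmm": 2.552}))   # no problems
print(sanity_check({"gmb": 24.41, "gmm": 2.552}))   # dropped decimal is caught
```

Because the failure modes are gross (missing or wildly out-of-range values) rather than subtle transpositions, checks like these catch most of them.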

Brian Johnson:

Yeah, that's great, because you invested the five hours and now you're saving time forever after that.

Mike Copeland:

Right, right, and it's scalable. I mean, I gave this app to a couple of people within our group to use, like our lab folks. I haven't gotten any feedback yet, but it should save them a ton of time. And if we scale this out to every tester in the department, we're looking at hundreds, maybe even thousands, of hours saved annually, where they're able to do something other than 10-key in numbers.

Brian Johnson:

Now, speaking of time savings, I want to ask you about the AI chatbot. That seems like a really good tool for getting answers to people quickly and, hopefully, accurately. Can you tell us about how you developed it and what you intend that to be used for?

Mike Copeland:

I built that maybe a year, year and a half ago, using what would now be considered an older technology in AI, but at the time it was pretty new. Basically what you're doing is called RAG, R-A-G, which stands for retrieval-augmented generation. So you take all your data, I put in all of our specifications, all of our manuals, different memos, all the research that we've done over the years, and put it into this database thing. Then, when a user asks a question, it goes and searches that database, pulls anything relevant, and then takes all that relevant stuff plus the question and sends it to the large language model. So now it's responding with all this context. And nowadays it's super simple to set those up. I mean, you can take your manuals and run something locally on pretty much any computer, 20 or 30 minutes maybe to get it set up on your own machine.
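The RAG pipeline Mike describes, chunk the manuals, retrieve the chunks most relevant to the question, and prepend them to the prompt, can be sketched minimally as below. Real systems score chunks with vector embeddings; plain word overlap stands in here so the sketch runs anywhere, and the specification text is invented for illustration.

```python
import re

# Toy "database" of manual chunks (contents are made up for the example).
chunks = [
    "Section 405: asphalt pavement shall be compacted to 92 percent of Gmm.",
    "Section 703: coarse aggregate shall have at least 60 percent fractured faces.",
    "QA manual: verification samples are taken by department personnel.",
]

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, k=2):
    """Return the k chunks sharing the most words with the question."""
    return sorted(chunks, key=lambda c: len(words(question) & words(c)),
                  reverse=True)[:k]

question = "What compaction percent of Gmm is required for asphalt pavement?"
context = retrieve(question)
# The retrieved context plus the question is what gets sent to the LLM.
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: " + question
print(context[0])   # prints the Section 405 chunk
```

Swapping the overlap score for embedding similarity and the chunk list for a vector store gives the production version, but the retrieve-then-prompt flow is the same.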

Brian Johnson:

And who's using it?

Mike Copeland:

We have it available to ITD employees. It's meant for inspectors, testers, you know, the resident engineer, anyone else that has a question about our specifications. Internally, at least right now.

Brian Johnson:

Yeah, so this is a closed system.

Mike Copeland:

Currently, yeah.

Brian Johnson:

Okay, so you say currently. What's your plan?

Mike Copeland:

I have no idea. I could see where it could be beneficial for all kinds of industry, you know, contractors and our consultants and stuff too.

Brian Johnson:

Having reasonable parameters to look at for data, or being able to save time on data entry, I mean, there are a lot of good, useful things. And being able to ask about your standards, like your state methods on something: hey, what's that again? What does it say? How long do I do this for? And boom, you've got an answer.

Mike Copeland:

I think that's really handy. One of the really cool things about that chatbot: we have a lot of different manuals, and as we were testing it out and asking questions, we'd notice, well, that's not right. But then it would cite something in one manual that conflicted with another manual. It made it really clear: okay, here's where we have conflicts in our current published documents. That's kind of been an unintended benefit of the whole thing.

Brian Johnson:

That would be tremendous for AASHTO standards, because we do have, I mean, you're talking about bulk specific gravity and Rice and all these, and there are all these pieces of equipment, let's say an oven or a balance, that are used in multiple standards. Wouldn't it be nice to ask, is it the same balance, or what can I use this balance for, which test methods? And all of a sudden you can figure things out a lot faster than if you were poring through all of these documents on your own. Okay, so we're talking about standards now, so I'm going to ask you another one that is tricky: how do you deal with concerns about copyright, intellectual property, and personal information when you're using an AI chatbot or any of these AI tools?

Mike Copeland:

I've tried using models downloaded locally onto my computer. They're pretty good. Obviously, at that point all my data stays on my computer, so there are no PII risks or anything like that. For the hosted ones, like ChatGPT or Gemini or any of the others that are out there, I really pay attention to the terms of use and how they're going to use my data, or whether they're going to use my data. Generally, the models that I'm using, which are a lot of different models, have opt-out options, or they won't use your data on certain platforms.

Mike Copeland:

I make sure that I don't use a model, or a site hosting a model, that's going to use my data for training.

Kim Swanson:

One of the things that came to my head, Mike, when you were saying the chatbot had the unintended benefit of identifying discrepancies between your materials, like the manuals and standards and practices, is that I think it would be very interesting to do that for the AASHTO or ASTM method versus the states that have their own methods for something, and see what's really different and what isn't. Because I feel like a lot of times states are using their own methods when it's really not that different, or not different at all, from AASHTO or ASTM.

Brian Johnson:

You know, we've got all these different standards development organizations, including all the DOTs, and if you could dump all those state methods in and say, write a standard that incorporates all these requirements, or give me a document that incorporates all these requirements, maybe it highlights the differences. And then you ask, let's say it's Idaho: Idaho, do you really care that much about, and I'll pick some arbitrary thing we were talking about today in one of my team meetings, which was the length of a spoon, which is an insane thing to specify, like it has to be this many inches long. Is it okay if it's this? How married are you to that length of the spoon? Are you okay getting rid of this and just getting along with everybody else and saying we don't need to do this? You could probably identify those things a lot more quickly.

Brian Johnson:

Mike, do you happen to have the chatbot available on your computer right now, so we could ask it some questions?

Kim Swanson:

Ooh, we get to see it?

Kim Swanson:

Yes! For those watching on YouTube, wow, you are going to have an experience. We're going to actually see the answers, so you don't have to just listen. So, shameless plug: head over to YouTube so you can see it.

Brian Johnson:

Does this chatbot have access to your, like, project data?

Mike Copeland:

No, not project data. Probably not.

Brian Johnson:

Okay, I can't ask this question then. I was going to ask it about something on pavements in Idaho. Does it know anything about, like, if you were to ask it how many miles of asphalt pavement need to be repaved in 2028 in Idaho, would it be able to answer that?

Mike Copeland:

I don't know, let's find out.

Mike Copeland:

It couldn't answer that either. But can I show you a tool that can?

Brian Johnson:

Yeah. How many miles of asphalt pavement need to be repaved in 2028 in Idaho?

Mike Copeland:

I'm adding, "Use your search tool to find out." Okay, it seems like that helps it remember that it's able to use tools. So now it's going to search the web. This is probably available on our website. A lot of times I look through the thinking as I'm prompting, because I use these interactively, back and forth with the AI. I find that if I look at the thinking, or the reasoning, of the model, it helps me find holes in my prompt. After it's done running, it'll let me go back and edit my prompt and rerun it, so I can fix my prompt, try different things, and plug the holes so that it gives me exactly what I'm looking for.

Brian Johnson:

Okay.

Kim Swanson:

I've also heard that if you ask ChatGPT or something like that to act as a prompt designer, like, how would you ask this or how would you ask that, it can help you narrow down your prompts. If you just ask it to act as a prompt designer, it will help you formulate your prompts better.

Mike Copeland:

That works really well.

Brian Johnson:

Wow. So here we go, we got our answer: approximately 151.4 miles, with the individual projects that need to happen and what needs to be done to them. Lots of seal coats going on in Idaho. It looks like it's citing its sources.

Mike Copeland:

Let's check out what it's citing. Yeah, FY2025 to FY2031. Very cool, so it went out and found that. That's pretty cool.

Brian Johnson:

Yeah, that's a good one. I can't believe it was able to be that exact.

Mike Copeland:

There are a few tools and things that I think would be useful to share with people. Like the other day, we were looking at some split sample comparison testing, trying to figure out the difference between labs, and I wanted to look at the gyratory data a little closer. So instead of trying to plot out gyratory data by hand, I made a drag and drop tool: you select a gyratory file and it plots it out, angle, pressure, moment. And I have another version somewhere that can do multiple gyratory files. So I was able to just create these cool visuals that I then screenshot and put into my write-up: okay, this is why there's a difference in the test results, because the gyratories are compacting differently. It's just a drag and drop tool that I needed once, but now it's pretty handy, you can do this all day long, and it took like 10 minutes to build.

Kim Swanson:

That's really cool. One of the concerns that I have, just as a member of the public, is what you were talking about earlier, the possibility that people might not actually perform the testing and just use AI to give answers, like, this is probably what it will be. Even if it's really accurate, that kind of frightens me, that there's not someone actually testing it. But if this is just taking the data from the test, or the results from the test, and giving you a different way to look at it and interpret it, that I love. But when you were talking about it guessing what the result would be, and it's really accurate, I'm like, oh, that seems not great. But again, I don't really know anything; that's just me being scared of, you know, a bridge falling or something.

Brian Johnson:

Can we go back to your in-house one for a minute? So I guess these models are able to pull information that you've given them, but they aren't necessarily storing their own information?

Mike Copeland:

Correct, correct. Yeah, this tool doesn't go out and search the internet or anything like that. It's answering just based off the documents I gave it, which is like our standard specifications, our quality assurance manual, our contract administration manual, things like that. So we could ask it about a past research project and what the findings were. We could ask it, you know, something about aggregate requirements, and it will know those things. But it won't know anything that another user is asking.

Brian Johnson:

Oh, okay. Most of my questions were dumb questions that I thought would be funny to ask, so I don't really have anything interesting left to ask about the AI chatbot. Now that you've started using all these tools, I imagine there's been interest. Well, I don't know how many people know about what you've been doing, but are you getting questions from other DOTs, or from other departments within the state government of Idaho, like, hey, can you help us with this? Can you tell us what you did? Are you being inundated with those kinds of questions now?

Mike Copeland:

Not a ton. I've definitely talked to other DOTs around the country and some university research groups about how I'm using AI and how to get into this: just use it. But yeah, there have definitely been a lot of conversations with different groups, which has been really fun, because I share what I'm doing and it's really interesting to hear what they're doing. I mean, I'm no expert; I'm treading water and drinking from the fire hose, and this is changing every single day.

It's always good to hear: okay, how are you using it, what are you using it for, and what's successful for you? Like the other day, someone was telling me that instead of writing down test results, they were dictating to it: okay, here's my Rice bowl, and saying the numbers out loud, so they're not having to walk over to the pen and paper every couple of seconds. They're like, it saves me hours every day. I'm like, oh, that's cool, I hadn't even thought about that. Yeah, it's good to have conversations with people and see how they're using it, and share what you're doing or what they're doing. It's all so new.

Brian Johnson:

Absolutely. And we are going to be seeing you soon at the AASHTO Committee on Materials and Pavements meeting in Hartford, Connecticut. I think we're going to be talking more about this, and hopefully we can find out if there are any other people in your position at the other states who are also messing around with this, and see if we can start to get some best practices together, and maybe even talk about it at the next AASHTO Resource Technical Exchange. I believe you are going to be having a conversation with Bob Lutz of our office about potentially doing that. Hopefully we can get something going; I think that would be a really interesting topic for everybody there. So for those of you out there who listen to this and also attend the Technical Exchange, that might be a good session in Kentucky in 2026. So stay tuned for that. And Kim, any last questions?

Kim Swanson:

No last questions, but I'm going to start the plug early for the 2026 AASHTO Resource Technical Exchange, which will be March 9th through 12th in Louisville, Kentucky. And we're having a Virtual Technical Exchange November 5th and 6th. They're both half-day events, and there'll be more information about both of those events on our website at aashtoresource.org/events. Here's your Quality Quick Tip of the day: a common problem with QMS documents and records is that they're out of date. It may help to enter due dates and automatic reminders into calendars to help keep you organized, on time, and in compliance. You can learn more by going to the re:source University section of our website and checking out the Road to Developing an Effective QMS articles for more information on this topic.

Brian Johnson:

All right, thanks. And Mike, thank you so much for your time today. Good luck with all your future meddling with the databases and figuring out new AI tools. I have a feeling that all the time you're investing now is going to pay off for a lot of people very soon.

Mike Copeland:

Yeah, it's been fun. Thanks for having me.

Kim Swanson:

Thanks for listening to AASHTO Resource Q&A. If you'd like to be a guest, or just submit a question, send us an email at podcast@aashtoresource.org, or call Brian at 240-436-4820. For other news and related content, check out AASHTO Resource's social media accounts or go to aashtoresource.org.