Digital Forensics Now
A podcast by digital forensics examiners for digital forensics examiners. Hear about the latest news in digital forensics and learn from researcher interviews with field memes sprinkled in.
AI in Court: Testimony or Tech-tastrophe?
Could AI in forensic analysis be more of a liability than an asset? Join us as we explore this pressing concern.
We kick off this episode with an important update for those dealing with Android extractions. Recent changes to the Android OS and Google Play Store might be causing the Keystore (secrets.json) file to either miss data or not be extracted at all. This brings attention to the vital role decryption keys play in accessing data from mobile devices.
Next, we dive into advancements in forensic tools like MSAB’s new RAM analyzer for XRY Pro users.
For iOS investigators, if you’re working with Cache.sqlite data, you’ll want to check out iCatch, a tool designed to map the data efficiently and streamline your workflow.
Shifting to the role of AI, we examine a recent legal case that highlights the dangers of relying on AI-generated results without proper verification. Accuracy and repeatability are key, and our discussion focuses on the ethical implications of using AI in forensic investigations. We emphasize the importance of thoroughly validating AI tools to maintain trust in the legal process.
Notes:
Updated Telegram Policy
https://www.linkedin.com/posts/luca-cadonici-41299b4b_policy-telegram-cybersecurity-activity-7244258209979334656-AxPl
https://telegram.org/privacy#8-3-law-enforcement-authorities
MSAB RAMalyzer
https://www.youtube.com/watch?v=1SEgSYSF03A
Expert witness used Copilot to make up fake damages, irking judge
https://arstechnica.com/tech-policy/2024/10/judge-confronts-expert-witness-who-used-copilot-to-fake-expertise/
https://law.justia.com/cases/new-york/other-courts/2024/2024-ny-slip-op-24258.html
iCATCH
https://github.com/AXYS-Cyber/iCATCH
Welcome to the Digital Forensics Now podcast. The year is 2024, and my name is Alexis Brignoni, aka Briggs, and I'm accompanied, as always, by the examiner from the frost up north, that tundra up there, the complainer in chief, the one that will wear her red shoes on Tuesday, that knows that there's no better place than home, or no place like home, I should say: the one and only, Heather Charpentier. The music is Hired Up by Shane Ivers and can be found at silvermansound.com. Let me tell you, you got lucky that I forgot to take off the overlay so people can see your face. I totally forgot. That's fine, that's fine.
Speaker 1:And the abrupt stop in the music Boom Done.
Speaker 2:That's all right. That's all right. We don't want any more music right now.
Speaker 1:Hey Heather, what's going on?
Speaker 2:Oh nothing. Thank you for the great introduction, as usual. I don't think anybody probably knows what the heck you were referencing, but that's okay.
Speaker 1:Yeah, because you're going to tell us right now oh, all right.
Speaker 2:So, yeah, it's cold. It's cold in New York, and I'm being picked on because I have to go out in the morning in the frost, and my friend here is in Florida where there is no frost.
Speaker 1:So hey, look, I mean, we did a lot of crazy stuff here, but at least the weather tends to be more kind. At least, well, at least sometimes, not always. Oh, yeah, I think you might have a story or two on that. But tell us more, explain more. So, okay, so it's cold up there, everything's frosty, and what's going on?
Speaker 2:30 degrees this morning. I was freezing to death. And then the "no place like home" comment, with my red slippers: yes, next week I'm on vacation. I'm so excited to go on vacation. I am taking my sister for her 40th birthday. She's getting old.
Speaker 1:Wait, wait, wait. I take objection, your Honor, about the whole being-40-and-old comments.
Speaker 2:I'm older than her, but she's joining me in the 40 club now. How's that?
Speaker 1:Well, I mean you could be 40 and old, or you could be like me and be 40 and not old. I'm just saying.
Speaker 2:Oh, if you say so, if you say so. But we're going. So my sister is a complete animal lover, and we're going for her 40th birthday to a wildlife park in Kansas, which is where I'm going to get my "no place like home," and it is actually an interactive wildlife park where you can interact with the animals. We're going to swim with penguins, we're going to meet and greet lemurs, and anything else you can think of. Giraffes, my favorite. So we're doing that next week and I'm super excited.
Speaker 1:You're like New York Snow White. I think I told you that already. Did I tell you that already?
Speaker 2:You did, you did. The bird cam, the bird cam.
Speaker 1:There we go. But you're also into the horse thing, the horses thing that you did last year, and now it's the penguins and lemurs and stuff.
Speaker 2:I love animals.
Speaker 1:You're like Snow White, like legit.
Speaker 2:I like animals more than I like people.
Speaker 1:I think that's something that we can all agree on.
Speaker 2:It's easy, and my sister loves animals even more than I do, so it should be a good time next week. I'm very excited.
Speaker 1:Oh, yeah, yeah, I'm excited too.
Speaker 2:Oh, yeah, yeah.
Speaker 1:Oh, I saw it, I saw it. Thanks, Kevin, for the insight. Now, so, yeah, read it, read it for the people that are listening: "Ask her about goat yoga."
Speaker 2:One of my lovely co-workers has chimed in. Yeah, I mean, goat yoga is awesome.
Speaker 1:If you haven't had a chance to do goat yoga, you have to go into the studio and... look, I'm gonna assume "goat" is "greatest of all time." You're like, it's the greatest yoga I've ever done?
Speaker 2:No, no. You go do yoga and you don't really do any of the yoga, because you're too busy playing with the cute little goats that jump up on your back as you do the yoga poses. It's so much fun.
Speaker 1:Oh, wow. It's like, uh, those massages where they walk on you, I guess.
Speaker 2:Yes, and they're so cute, but then they try and eat your hair and your clothes and they poop on you, so it gets a little messy.
Speaker 1:Yeah, I don't know that you're selling me that part as good now.
Speaker 2:It's fun. I'm telling you, it's worth it.
Speaker 1:Hey, it's always fun to get pooped on. My kids used to think so when they were little. Anyways, well, that's awesome. That's awesome, the New York Snow White, that's great. And I'm looking forward for you. You know, you work hard, so you deserve to have a vacation and have animals poop on you.
Speaker 2:Well, thank you. I'll bring back pictures, not of the poop but of the animals.
Speaker 1:So, well, um, what can I tell you? You know, we're talking about how it's cold up there and not so cold here, but... oh wait, Kevin just said something. "No, it's not, trust me." I guess Kevin has had some experience with that, with the kids. He's having all that parenting experience. This is great, we're definitely bonding over that.
Speaker 1:No, but Kevin was saying that he'll take the snow and ice over hurricanes, and, you know, I don't know about that. I guess it's about taste. Oh, now, now I'll get serious. Look, we had a hurricane in Florida last week, Milton, as most people watching the news saw, and it was serious for the west coast, for Tampa: a lot of flooding, you know. So it was serious, I won't deny that, and that's not something to make fun of, right?
Speaker 1:So there's some risk to living here, but there's risk to living everywhere. If you're in San Francisco, or in California, you might get earthquakes. In some other parts, in the mountains, you might get wildfires. So there's always risk wherever you live, but you pick the risk that you're willing to take on, right? So I'm okay with the hurricanes. We got lucky here in Orlando last week. My opinion, again, is from being up in Seminole County, which is kind of north of Orlando: we only got some wind, some rain, but it wasn't as bad. Ian, a couple of years ago, I felt to be worse here, more wind and more rain. That's just me, here.
Speaker 2:Yeah.
Speaker 1:So, you know, I didn't even lose power. I had the lights flicker across the night like four or five times, and thankfully nothing got fried with the flickering (that's good, that's good), but we didn't lose power, so I didn't get to use my generator. You know, I get excited about the thing, you know. But hey, look, Andrea is in the chat. Yeah, hello! She's awesome. Good to see you here.
Speaker 1:Um, so yeah, so no, we're lucky, nothing happened to our house, and, you know, recovery is moving along. And hopefully the Tampa Bay area, kind of from Cape Coral, Fort Myers up north, all that coast, hopefully they get the power back soon and recovery efforts are successful and quick.
Speaker 2:So yeah, I have a good friend there that has the power back now in the Tampa area, so yeah.
Speaker 1:That's awesome. That's awesome. They're doing good, they're doing good work. So, yeah, a lot of things happened last week, but now let's get to the meat and potatoes of the show. So what's going on in the last couple of weeks that we need to let folks know about?
Speaker 2:We have a public service announcement about your extractions. This came through the Cellebrite announcements, which is where you can find updates on things that are happening with the Cellebrite tools, but also on extraction issues or other types of digital forensic issues. And there's an announcement in there that was recently posted: there's an issue where the Android Keystore is not functioning on some devices. You may have an empty keystore file (that file is called secrets.json), so it may cause either an empty file or the file not being present at all. They're saying that it's due to external factors, and it's primarily affecting devices running Android 15 or ones with a newer version of Google Play services. So keep an eye on your extractions. If you're missing that keystore, you may be missing data like Session, or, if the Samsung Rubin data needs the keystore to decrypt, you'll be missing that, if your keystore is not extracting properly or not extracting at all.
Speaker 1:Well, and correct me if I'm wrong, but some chat applications use the keystore to decrypt, and I mean encrypt and decrypt, the messages, right? Yes. And, you know, this is the thing, right? We're used to running the tool, and if we don't see chats for the particular application, the assumption is there's no chats. Well, that cannot be the assumption, right? You need to at least give it a quick look.
Speaker 1:And we discussed previously in other shows, and we need to do it again next year, a methodology, and what you should do when you work on phones, right? And one of the things that I suggest folks do is try to at least ascertain what has been parsed versus what hasn't been parsed, right? Because the tool won't tell you. The tool tells you what it got; it's not going to tell you what it didn't get, even though it's there. It's not going to tell you that. So I'm going to make one up... well, not make one up.
Speaker 1:Let's say Signal. Let's assume Signal uses the keystore, for example, right? If the keystore is not there, the tool will show no Signal chats from that app, but it won't tell you that they're there and that it just can't decrypt them because it doesn't have the key. It's not going to tell you that. And that's important, because let's say this issue happens in your case, and some chatting applications are encrypted: you don't have the keystore, you cannot get to them. But let's say there's a patch later, or the tool gets updated, so that they're able to get it later. If you don't take note of that, you won't have the knowledge to go and maybe re-image that again, or pull the keystore out when there's support for it, right? So you have to be really aware of not only what the tool is showing you, but also what the tool is not showing you, and this is one good example of why we need to do this.
Speaker 2:Right. Cellebrite put out in their announcement, too, that they're working on a solution for this currently, so hopefully other vendors that extract data are as well. And their suggestion is: if these applications are relevant to your case, so if you're looking for that Samsung Rubin data, or Session, or whatever may not have decrypted, hold on to the device. When that fix comes out, you'll have to re-extract it for that data.
Speaker 1:Yeah, exactly, and that's the whole point. If you don't know that there's stuff that was left over, how will you know you need to re-extract it when the solution comes? And again, that speaks to taking some time to understand what's happening with your tools and how your devices work, because most examiners, I'll be straight, they'll just go and parse it. Here's what I got. Were there any Signal messages? Let me see, do I see Signal here? No, there's not. No, there are; it's just that they weren't decrypted. So, you know, you've got to be really careful there.
Speaker 2:Yeah, definitely, definitely. I would say, too: depending on what tool you're using, look in the log files. Or, specifically for PA, because I use the Cellebrite tools a lot, that trace window. You can look in the trace window, and if it's not decrypting something because it can't find the keystore or doesn't have the keystore, it'll say it right in that trace window. I don't remember exactly what it says, but it literally tells you that it can't decrypt this database because it doesn't have the keys.
Speaker 1:Whenever I use PA to do anything, the trace window is always open. I cannot use it without that thing being open period.
Speaker 1:Yeah, I agree, that's just how it is. And even the tooling that I kind of lead, the LEAPPs and stuff: that's why we put those errors up front as it's processing, because I believe we need those to make sure we can follow up on things. And a quick related point, right: it's fine to have the trace window, and that will show you if there's a problem with things that the tool is supposed to get, but it's not going to show you a problem with something the tool doesn't know how to get, because it's invisible to the tool, right? So that's why, and of course do what we do, use the trace window, for sure, but always take a few minutes, it's not long. If it's an Android device and you have a full file system extraction, go to the /data/data folder and browse quickly through all the directories there. Those directory names are reverse-URL bundle IDs (I'm going to make one up: com.signal.whatever, right), and you can usually tell what an app is. So at least you can look and visually check, just in case, because if it's not parsed by the tool, it's not going to show up on the trace window.
Speaker 1:Also, look at the artifacts of installed apps that the tooling provides you. But still, I always like to look at it myself; I don't like trusting the tool to just show me what's installed. I'm going to look myself, right? Yes. And what I like to do is, I either run the applicationState.db parsers for that (long story short, what applicationState.db does is correlate the apps that are installed and where they are on your device, all right), or I also run, in iLEAPP, the tooling that we all put together, one module that looks for some metadata plist in each app directory. Even if the app's deleted, that plist might be there, as long as garbage collection hasn't started, right? And you get a list of all the apps that are installed, and possibly apps that were uninstalled fairly recently, right? And I always do that, because, again, it only takes a few minutes. Sounds like a lot; it takes a few minutes. I want to have that knowledge, to have good situational awareness of the device I'm working on and what other work might be pending in the future. Does that make sense, Heather?
Speaker 2:Yeah, it definitely makes sense. And yeah, it's super quick to run the LEAPPs for looking for that. Like, seconds. Oh yeah.
Speaker 1:And you look at the report and you quickly see the apps: boom, boom, boom, boom. And I have a couple of cases where I could show that an app was installed. The app was not there anymore, but the evidence that it was installed was there in that metadata plist, and it was really important for the case. And some tools, I don't think they actually show you that in a report. So it's on us, it's on us to make sure that happens. Just real quick: a hello from Melbourne, all the way out in Australia. I don't know what time it is, but I bet it's either really late or really early. Hi! So thanks for hanging out with us at the opposite time that we have here. And Wellington, New Zealand, too.
Speaker 2:New Zealand! Nice.
Speaker 1:One of my favorite places in the world is New Zealand, and it was a struggle to get there. It's not New Zealanders' fault; it's the fault of the United States aviation system and CrowdStrike. But one of my favorite places to be is New Zealand. So good to have you here.
Speaker 2:Yeah, definitely. I'm still jealous about your New Zealand trip, so we'll just leave that at that.
Speaker 1:Oh, my goodness. Well, maybe in the future they have another event and we'll, we'll drag you along.
Speaker 2:Yes, yes, I'm in. Next time I'm gonna force my way in.
Speaker 2:So, um, all right. So Telegram's policy has been updated. I think it was on Luca Cadonici's LinkedIn that I first saw this. He shares some really good updates in the digital forensics community, so if you haven't connected with him on LinkedIn, make sure you do. But Telegram updated their policy to share IP addresses and phone numbers with authorities if there's sufficient evidence of involvement in criminal activities that violate the platform's terms of service. So I think that's kind of opposite of the way a few other platforms might be headed, but they are now alerting authorities if these terms have been violated.
Speaker 1:As always, none of the things that we say here represent our employers. We don't speak for them at all, and we are opining as other community members, right? And that being said, I think it's pretty obvious that when the CEO gets arrested in France, yeah, that might spur some changes. Like, again, we don't know for a fact that that's what spurred it, but I think it might be reasonable to assume. Yeah, he's got a couple things, a couple things that make him look good coming up in these months, I think.
Speaker 2:I think yeah yeah, your Honor.
Speaker 1:Look, we are totally compliant with the police officers.
Speaker 2:Now, you know, they do point out in their updated policy, though, that they are not sharing user messages with the authorities. So, I mean, IP addresses and phone numbers are what you're getting.
Speaker 1:Yeah. I mean, up front, I'm ignorant of how Telegram works on the backend. Obviously I haven't really done any research on it, but most chat providers are doing that now, right? They make sure the encryption is handled by the endpoints and not by them.
Speaker 1:So then they can claim, hey, we don't know anything about it. But at some point, one would assume that some sort of identifiers must exist, and in this case it's pretty obviously the IP address and phone numbers, which I believe are used to register. Now, does that mean that criminals are not going to try to do something else: get behind VPNs, use SIM cards, the burners? For sure, right. But just having that data, even if it's fake or whatever, might lead to further investigation. So, I mean, there's the balancing between privacy and the responsibilities we have, you know, as citizens and as law enforcement, to protect law-abiding folks. It's something that we'll keep kind of going back and forth on, but I think it's a good thing and a positive development. Telegram is a really popular worldwide chatting application, among other things. So this is a good development. Real quick: Jessica is heading over to Australia.
Speaker 1:Jessica, hi! Good friend, friend of the podcast. So, can I go with you? I bet she's staying up late so she can start getting into the time zone over there, right? Yeah. And Bo Dissel, who was in my class: thank you for being there. Again, I'm happy that it was of use, the class that I gave on mobile forensics. Not me, well, me, yes, but also two more instructors, right? It was a great event. So thank you for being there as well and paying attention to my blathering.
Speaker 2:Awesome. Uh, there was also a recent posting (I always see everything on LinkedIn), a posting by MSAB. XRY is their tool, and they have an early release for a tool called RAM Analyzer. We've talked about RAM on a previous episode, I think it was one of our really early episodes, where XRY actually has the capability of pulling RAM from certain devices, and they're planning on having the capability for additional devices in the future. But the new tool is developed to help you analyze and make sense of RAM dumps from mobile devices. It is specific to XRY Pro users, so you'll have to be a Pro user to use it. And I have to share the picture they used with their announcement, because I think it's funny. So they put out their announcement, but the picture is "RAM Analyzer" with a picture of a ram with giant horns that just says "nice," with a mobile phone and a smiley face. Get it? Get it? RAM.
Speaker 1:But I don't know... get it, get it?
Speaker 2:RAM. Got it. But if you haven't had a chance to check that out, definitely check it out, because it's really cool. I've had a chance to check it out and it's really cool.
Speaker 1:No, and I wish... I mean, I don't have this knowledge, and I don't know that I'm going to acquire that knowledge, but the knowledge I'm talking about is how to decode things in RAM, right? I'm not a RAM expert, like, by any stretch of the imagination. The best I can do is pull RAM from a Windows computer and then, you know, chug it over to somebody else that knows what they're doing.
Speaker 1:Can I run Volatility commands? Sure, I can do that, right? Um, but I'm not really an in-depth memory guy, right? But I think it would be really good... you see how developed analyzing memory in Windows is, right? Pretty advanced Volatility tools, folks that specialize in that. Hopefully something like that starts to develop for Android devices, because, by the way, this is an Android device thing, right? You're not getting RAM from iOS; you're getting this RAM from certain Android devices, as support is provided. And hopefully we get to that level where we have, which I think they're kind of trying to build, right, like a little utility for Android RAM, and hopefully folks can go and say, hey, look, these are the structures and this is why they're important, and go from there. So that's something I hope for in the future.
Speaker 2:Previously, when we did the podcast about the RAM, there were questions too, like, oh, but you're not going to get deleted data. And actually, I found one artifact in the RAM that was deleted data. It still remained in the RAM. It was a Samsung note, because I was doing it on my Samsung Galaxy, and it was a note that I had deleted, and I actually found the content and timestamp right in the RAM. It still resided there. So there is potential.
Speaker 1:Yeah, that's amazing. And if you have a super big case... and again, we're not haters of any tools, but we're also not shills, right? We just tell you what we see. And in this case, if you have an important case, you might need to get this tool and check out memory, because memory might have something that's relevant that you might not find anywhere else.
Speaker 2:I have to share this comment, because I think I'm going to do it. Jessica says if I get to the airport by tomorrow morning, she'll put me in her suitcase. I'm not sure I'll fit in your suitcase, but maybe.
Speaker 1:I think you do. I think you do. Now, you think so? Yeah.
Speaker 2:Maybe there's a slight chance. I'll have to kick all her stuff out of the suitcase, so I'm going to guess she needs clothes to go over to Australia.
Speaker 1:Look, just put them all... you, you wear them, you wear them, and they go in. Get in!
Speaker 2:Um, on the RAM Analyzer, quick, before we move on to the next topic, though: there's a YouTube video that gives a brief explanation of how it works, and I know in the release notes for the XRY version that supports it there are more details on how to actually process it and how to kind of parse it and view the data. So if you have the capability of looking at that, go check out the YouTube video and then check out the release notes.
Speaker 1:Yeah, and as always, we'll put this in the show notes. And for the folks that are listening, the show notes from the podcast will be there, and also in the blog for the show.
Speaker 2:Yes, yes. Okay, so this one... I love this topic.
Speaker 1:Yeah, so everybody that's listening, right: you had a chance to go to the bathroom previously, but you lost it, right? This is a topic that I think is gonna be, you know, a topic that we're gonna appreciate, you know.
Speaker 2:Yeah, so there's been a lot of chatter around AI. We've talked about AI I don't know how many times already, and I think we're probably going to continue. But there was an article that came out, and the title of the article is "Expert witness used Copilot to make up fake damages, irking judge."
Speaker 1:Oh, my goodness. And I've been on this AI binge. I say binge, but in the last three or four days I've been asking the community through LinkedIn all these questions about AI and its use in digital forensics, with a lot of folks commenting. And then this thing came out, right? So it was totally up our alley. So what happened with that? Tell us the story about the expert witness using Copilot.
Speaker 2:Yeah, so it's actually a court that's fairly local to me. The decision came out of Saratoga County court, which is about a half an hour north of me. It's in between my house and my parents' house, and we actually work with that court all the time, but that's kind of beside the point. They ruled that the use of AI as a tool to assist in preparing an expert damages calculation should be the subject of a Frye hearing. If you don't know what a Frye hearing is, it's to determine admissibility prior to, like, an anticipated trial. This case, long story short, involves a dispute over property, and an expert was brought in to calculate some numbers for damages related to this dispute. The expert used Microsoft Copilot to calculate the damages and submitted the report directly from Copilot, and did no additional work.
Speaker 1:I mean he couldn't use ChatGPT. Come on, it's a joke, people. It's a joke, it's a joke.
Speaker 2:Would it have been better?
Speaker 1:I don't know.
Speaker 2:So I guess, specifically in the presentation and review of evidence and documents, he used this. You cannot trust that to write your report for you without any additional verification. The court... I'm going to put up a picture of something the court said here. Let's see, and I'll read it to everybody. So, hold on: "Perhaps the son's legal team wasn't aware of how big a role Copilot played." Schopf, who is the judge, noted that Ranson, who is the expert, couldn't recall what prompts he used to arrive at his damages estimate. The expert witness also couldn't recall any sources for the information he took from the chatbot, and admitted that he lacked a basic understanding of how Copilot works or how it arrives at a given output.
Speaker 1:Wow, yeah, no, and it gets better right.
Speaker 2:Yeah, so I mean, this is-.
Speaker 1:But wait, there's more.
Speaker 2:This is awful in itself, but apparently, according to one of the articles we read on this, the court entered some prompts into Microsoft Copilot, to kind of do some testing of their own. One of the prompts they entered was: can you calculate the value of $250,000 invested in the Vanguard Balanced Index Fund from certain dates? It returns a value, right? So then they asked the question again, just in a different way, the same query worded differently, and they used a different computer (I'm not sure if that would have mattered or not), but it returned a completely different number for basically the same question. I mean, these two numbers were shown to be close to what the expert had in his report, but they weren't the same.
Speaker 1:Like, they're different. Define "close," right?
Speaker 2:Yeah.
Speaker 1:No, and for people to understand: when it says "the court did this," it's the judge, right? The judge, I guess, whipped out a laptop or a computer and went, you know what, I'm going to do this myself, and just started, you know, clicking at the thing. Are you kidding me? Like, if I'm this guy, I'm dead. I'm already dead. They can pronounce me dead on the spot. Don't even call the ambulance; just call the funerary carriage and take me to, you know, to where the dead people go. I don't even know what its name is in English. But are you kidding me?
Speaker 2:Oh my God, that's crazy. The fact that there's any variation in the number it'll calculate, with the same type of question, or the same data posed in a maybe slightly different question... I mean, stop using this for court, people, if you are.
Speaker 1:Well, I mean... I want to say so many things at the same time. Can I, Heather? Can I say a few things? Go, go? Okay. So there's a thing, right. We're at the point... there's a thing you can say. And actually, Brett is on the chat, and Brett is another friend of the show (and not everybody's a friend of the show; these are really special people, right), and I'm gonna read his comment: "Some use AI to streamline their investigations, create creative insights, and then validate everything AI suggested. Others use AI because they're lazy." Right? Yeah.
Speaker 1:Now, that being said, let me make some comments on that. I'll leave Brett's comment there, because this is something Brett and me, and some other people, were talking about on LinkedIn. I'm thinking about a couple of things, right. Yes, you can use AI to streamline investigations, but at some point... because, if you cannot trust a single thing the AI says, because there's randomness built into the AI. That's what the AI is: what makes it generative is a bit of randomness. And in our scientific neck of the woods, we don't want randomness; we want repeatability, right? We want things to be repeatable. We want to know inputs and expect outputs. And like the judge did: the judge asked the question, and ChatGPT... well, Copilot, or whatever the AI was, gave different answers, right? That's tough. Because at some point, if this is a small data set, you can say, okay, I can check a small data set. But what if the data set is large enough that the generative AI gives you all these sorts of outputs, right? Is it really worthwhile for me to go through the AI if I have to verify every single thing? Because there's no validation of the tool, right: the process itself is a black box that's unknowable to me. And so then what, right? I cannot guarantee that I can verify the inputs against the outputs, especially when I don't know what the inputs might be, right, because I'm doing it through the lens of the AI. And I'm still on the fence.
Speaker 1:Oh, Jessica's made a great comment. We're going into that, Jessica; I'm still on this. And look, I don't have a solid opinion on AI one way or another as a concept, because I see a lot of utility. I'm actually making a presentation for ECPI College next month about AI and some things to consider, both pros and cons, for digital forensics. I'm working on that presentation right now, and I'm still working through those things in my mind. Because, do I really want to go with AI to the court? Let's think about a couple of things, right. When you go with your process to court, right, before you get there, Heather: is it that I hide the ball and keep it to myself and just spring it on people in court? How's that work, Heather?
Speaker 2:What do you have to do first? Yeah, no, you can't just spring it on people.
Speaker 1:No, I mean, you have to go through a process called discovery, yeah, and you need to provide it to the other side. And I say "other side": it could be the defense, it could be the prosecution, it could be two parties in a civil case. So the question is, well, I tell them I used AI, and then, if I'm on the other side, I'm going to ask: well, what prompts did you use? What were the responses? What responses did you consider relevant, which ones didn't you, and why? So you validated a few of those; what guarantees me that the use of this tool, your validation, was properly done?
Speaker 1:Considering that the tool gives you wrong things... let's go even further, right. With a big enough data set, the tool will make some interpretations, because of the whole talking to it, right? The tool has to interpret, dumbly. And I say dumbly because I recently saw certain research showing that LLMs, generative AI, don't think like people think. They're really dumb at math and other things, because it's not really thinking as we think. They seem like they're thinking, but they're not, right? So the system will try to understand, in its own way, what you're trying to say, and will respond to that. And not everything is black and white; some things are based on interpretation. And then you say, well, I'll check it. Well, how much are you going to check? And I'll even say more: right now, people don't want to look for unparsed apps on phones.
Speaker 1:They don't want to do it. They say the tool does it, and if the tool didn't do it, then it's not important. Do you think they're not going to do the same thing with AI? And just calling the examiner lazy... and again, this is not a dig on Brett; this is just a different perspective that we're sharing, right? Because what Brett is saying, I agree with him 100%. He's correct. I'm just providing another perspective on top of that, right? If I assume that they're lazy (which they are, by the way), that does not solve the problem, because the tool will provide them that, and they will just copy-paste it out, because the tool gave it to them. And tool vendors are really focused on tool capabilities; they're never focused on tool limitations, or they do just enough to say that they do, right? Am I lying here, Heather? Am I off base?
Speaker 2:No, you're not lying. You're not lying at all. Just to go back to your discovery comment, though: if you're not documenting the prompts that you're putting in, keeping notes on that, and submitting it for discovery, that's going to be a violation of discovery. I can see that being thrown right out completely.
Speaker 1:All of the work that you've worked on, absolutely. And look, do we use black boxes already? For sure. There's a lot of proprietary data in how the tools work, right? But there's some sort of validation of that process from the vendor side, and then, for yourself as an examiner, at least you can say: look, when I put bananas and strawberries and peaches into this thing, I expect a fruit salad on the other side. And I've seen that repeated enough times that I know it's going to happen. Therefore, when I put in other fruits that are unknown to me, if I see a fruit salad, it's because the inputs are fruits, right? Can I do that with AI? Is there a process to do that? And maybe there is.
Speaker 1:I'm ignorant, right. But I don't think we can compare it, make a one-to-one comparison to our verification and validation processes for current tools, and then export that completely and say it applies to AI. And I'm having some conversations with people way smarter than me, for example, Jessica Hyde. She has this deep knowledge of how some of this AI stuff works, and I've been so happy that she has some time to speak to me about it. I don't understand it yet, but there are other ways of validating, other verification processes, that need to be applied to these systems, and I don't think we are really doing that as a field. And that's why we're having all these troubles and problems when we try to bring it into court.
Speaker 2:And all this nonsense happens, right.
Speaker 1:Let's... I mean, before I get on my horse again: do you have something else on that?
Speaker 2:The court continued to ask questions of Copilot. One of the questions was, "Are you accurate?" Copilot generated the following answer: "I aim to be accurate within the data I've been trained on and the information I can find for you. That said, my accuracy is only as good as my sources, so for critical matters, it's always wise to verify." The court followed up with a similar question; instead of "are you accurate," it asked "are you reliable," and it got a completely different response. Copilot responded with: "You bet. When it comes to providing information and engaging in conversation, I do my best to be as reliable as possible. However, I'm also programmed to advise checking with experts for critical issues. Always good to have a second opinion." And then there was one more that I highlighted, an additional follow-up question that asked, "Are your calculations reliable enough for use in court?" Copilot responded with: "When it comes to legal matters, any calculations or data need to meet strict standards. I can provide accurate info, but it should always be verified by experts and accompanied by professional evaluations before being used in court."
Speaker 1:Yeah, the AI is more aware of the limitations. It's more aware than some of the experts that use it. That's insane, definitely.
Speaker 2:But I just... I find the difference in answers between "are you accurate" and "are you reliable"... I mean, I feel like it should return the same answer on that. But it was similar, I guess.
Speaker 1:I mean, it's not the best analogy, but with LLMs, what they do is: imagine a train, right, and there are no train tracks, right? The train is going at full speed, and the train is putting a train track in front of the next one as it's going really fast. That's what a lot of LLMs do, right? It kind of builds the train track as the train is moving, or it's building the plane as the plane is flying, based on a background of information about what flying should be and how you should fly, or how the train works, right? Which leads to the point that Jessica made in the chat a second ago.
Speaker 1:She says that... I'm sorry, I put up the wrong comment, hold on. She says validating the results doesn't remove the bias, right, because you only show affirmative responses. And I want to unpack that real quick, from what I understand of what she said, right. First of all, how are these systems trained? Where did the data come from, right? Does the data... okay, wait, stop, stop the presses. Can you put your mug up to the screen again, so we can bask in the glory of your mug?
Speaker 2:I just wanted to drink.
Speaker 1:We need to show your mug. Your mug says "protect, attack, take naps." It has baby Yoda, Grogu, on it, and it's a big cup.
Speaker 2:It is a big cup.
Speaker 1:Yeah, we need to mention that it's bigger than your face. It's great, all right, thank you.
Speaker 2:Thank you for pointing that out.
Speaker 1:Back to my rant.
Speaker 1:Okay, so, yeah, so how are these systems trained? Based on what data, right? If you give it data regarding a particular segment of the population, that means that other segments of the population are not represented in the data set, and you're going to get really problematic results from it, right? And, you know, especially if you're only focusing on the answers that you care about: what about the ones that you don't, right? "Well, this one works for me." Well, that bias that's built into the data set it was trained and tested with, you don't even see it, right? And that's another big issue that we need to talk about, right? And that's not even talking about, okay, the LLM was trained with data from the internet, right, but some of that data, they took it without the owners' permission.
Speaker 2:Oh yes.
Speaker 1:Okay, so imagine this. This is my analogy, right. Imagine I go to court and I use all these expensive tools, but I did not pay for them; I pirated them. Or I pirated my Office or my Windows copy that I did my examination on, right, and the defense gets a hold of that. What do you think is going to happen?
Speaker 1:You know what might happen? I might get prosecuted for it, right? Like, we expect to do legal work in a legal way, right? So what happens when some of these LLMs are fed in a way that's not transparent, in a way that's obscure, or not even respecting the rights of the creators of the original content, right? How's that bad apple, that poison apple, filtered down to my case, if my case is built on an LLM that uses that data?
Speaker 1:And I'm saying that because I don't know. I don't have the answer. I need some legal minds to start thinking about those things, right? Because we're tasking the AI based on knowledge that it shouldn't have. Even more so, let's discuss knowledge that we shouldn't have. Let's say I ask the AI to look at evidence for a particular crime, and the AI says, yeah, there's evidence for this crime, and also, adjunct to it, there's other crimes. Did I have search warrant authorization to look for other crimes?
Speaker 1:And, you know, "the AI did it. It wasn't me, it was the AI. I didn't tell it to do that." Right? Guaranteed. So, yeah, so how do we, then?
Speaker 1:The question is, how do we put some safeguards that are in the code, automated, for these systems? How do we validate in a way that's actually representative of how AI works, where we're not trying to copy a system that works for our forensic tools, which are pretty static: they take data, data structures, they go about and parse them out, and then you make the interpretation. When you put in this filter, with the AI kind of giving you a few interpretations before you make the final decision, how do we validate and verify that data in a way that's scientific and presentable in court, right? You know, we always laugh about the "oh, look at the expert, he put it in without checking, ha ha ha."
Speaker 1:Let me tell you, I believe... I might be wrong, I hope I'm wrong, but I believe there's a lot of issues where, even if you make your best attempt at using these tools, you might get burned, because these tools have not gone, I believe, enough through this process of testing. Testing in the sense of being used in the legal process, being used for casework, both civil and criminal. I don't think the tooling has been applied enough in these fields of knowledge for us to say we know what the outcome of this situation is going to be. I don't think we're there yet. What do you think?
Speaker 2:They haven't been around long enough to work out all the bugs, definitely. Yeah, we haven't even discovered half of the bugs that are going to be discovered in the future.
Speaker 1:So, yeah, yeah, let me put up this comment from Brett, right: "AI should be put through the scientific method. It's like asking a question of someone you don't know. You have to verify the answers independent of AI." And that's correct, right. Now, my expansion on that point is: well, the thing is that the scientific method, right, it's about asking questions and expecting some answers, and then extrapolating into the past and into the future because of the repeatability of the thing, right. The scientific method only works because we know that things are stable, right? Gravity is not going to all of a sudden not work, right? You know what I mean. We know that electrons behave in a certain way, and the protons and neutrons behave in a certain way; therefore, I can make some predictions about how things came to be, from the nucleus of stars, you know, fusing hydrogen. Does that make sense?
Speaker 1:But when you have a system that, at the heart, has randomness, and randomness in a way that our methods cannot quantify, right? Do we know what the error rate of our LLM is, with certainty? I don't have the answer. Maybe there is one. Again, I'm ignorant, so take this with a big grain of salt, no, a rock of salt, from me, right? This is me talking with a lot of ignorance. I don't know what the error rates are. How do you calculate them? What are the confidence bounds on LLM outputs, in regards to what they're saying?
Speaker 2:Do you think maybe we should just ask Copilot or ChatGPT what their error rates are?
Speaker 1:I mean, maybe. We'll see what they say. Well, actually, yeah: how do they do it?
Speaker 2:And send in the different answers that you get.
Speaker 1:But that's the point: most likely we'll get different answers, right? Yeah. And that's why I'm saying: the scientific method, it's true, we need to apply it. So again, Brett is correct. But the point I'm expanding on is how. It can't be, in my opinion at this point (and it's a tentative opinion, okay, people), it cannot be "I'm going to do it the same way I do my other tools, where I look at the inputs and the outputs, and if they work, that's great, and if they don't, I'm just going to chuck it."
Speaker 1:Because, what about discovery issues, right? What about impeaching the process, when your bias is being reaffirmed by selecting the things that work and then totally dismissing the ones that don't? Doesn't that tell you something about the process? Of course it does, especially if you're on the other side, right? So that's why I say the scientific method should be applied.
Speaker 1:The question is, what does the scientific method look like applied in these circumstances, with a system that has randomness built in and where repeatability is not guaranteed? Hallucinations are a big thing with this type of system, where they just make things up. And even if you say, well, my LLM actually references the source to make sure it's correct: what tells you that the reference is correct, or that the interpretation of that source data is correct? And then the last part: well, you verify it, sure. But when you have 100,000 of them, are you going to verify them one by one? Am I gaining anything now by using AI? For certain processes, we might not be gaining anything, or any speed. We have to verify, because we cannot really validate the process, so we have to verify everything, right? And we have to be careful with those terms, verification and validation, right. Is there a gain there? Some areas, I believe, do get a lot of value from AI: when you want to express an idea, or create. What is it that you created recently, Heather, with ChatGPT? What was it?
Speaker 2:Oh, I actually... I'll take things and summarize them. I'll put, like, something I want to summarize into bullet points. I just did it for a PowerPoint presentation for work, and I'm like, I know what I want to say. And honestly, sometimes it just makes what I've already created sound a little better. So I'll use it to make things sound better, yeah.
Speaker 1:I mean, and I see value in that.
Speaker 1:Well, no, I mean, there's value in that. Of course, if you don't go in and fix it up, you're going to sound like a robot, right, but there's some value in that. But for certain processes, how will the scientific method be shown to work in these systems? And I think everybody thinks, oh, check the output and that's it. I think the courts are not going to just be happy with "oh, you checked it, and that's fine." Okay, how do you check it? Where are the negative results? Like the judge said: explain to me how this works. "Well, you know..." and some weak explanation. If I'm on the other side, whatever that side is, I would like to really know more. And, Heather, we're not talking about specific cases, but we've seen cases where the expert on one side just has cursory knowledge of the field (oh, definitely), and the other side goes at it, and they go: oh yeah? Well, define this, define that. How does this work?
Speaker 1:And they don't have answers. That's horrible. It looks horrible when you don't have those answers. Um, so yeah. I mean, there's a lot of good, there's a lot of bad, right? And look, Jessica has some great comments: AI tries to be a people pleaser, right? Yeah. And it doesn't provide citations; but even if it does, I'm going to question them. And again, do I get a time savings on that, right? That's still an open question.
Speaker 2:And about... go ahead, go ahead. On the people-pleaser comment, too: so, I have asked ChatGPT things in the past where I know the answer's wrong; I can tell by reading it. And I'll write, "No, that's not right," and I'll reformat my question. But when you write to it, "No, that's not right, this is what I'm asking you," it actually does apologize to you for getting the answer wrong. So it's definitely a people pleaser. Just a little tidbit.
Speaker 1:Yeah, no, and again, that's also a bias that's within the system, right? But we've done other shows that touch a little tiny bit on bias, and I think we should do one in the near future again. How's that look for people as well?
Speaker 1:But, um, Brett makes a good point, right: AI output will never be repeatable. Of course not. You can give it the same inputs, or prompts, and it will tell you different things, related, but never the same, right? Or it's really rare. Now, he explains that, yeah, the answer can be put through a scientific method, right? You can say, okay, this answer from the AI: is it true or not? Yes, you can do that, right. But then again, that's the whole point, right: at what point am I having any gains here? What am I gaining? For example, let's make the comparison with tools.
Speaker 1:Right now, you validate your tool, that it parses SMS and MMS messages, right? You validated that, and you verify the outputs with known data. Now you put in unknown data, and it gives you the chats. What do I do? Here are the chats. The really important ones, maybe the two or three, I will go into the database myself, put an eyeball on them, and go from there. Why? Because the tool was validated, and I verified with known data, right? Right.
Speaker 1:Imagine if I had to verify every single line of that chat, because I cannot trust the tool to have pulled the things properly, because I cannot verify it from the get-go. I cannot verify that process; the process is so obscure and random that I cannot have a scientific verification. Is it feasible? I believe it's not, right. And that's the thing, as we get more into these use cases for this type of tooling in digital forensics. Again, we're talking about digital-forensics-specific issues here; I don't know how it looks in other fields of knowledge. But do I want to verify? My opinion is that most people are not going to. They're going to think it's like using the other tools, where they parse the chats and here they are. Then what, I have to go through each and every one of them? And don't tell me that you trust it, because you can't, and you're not going to go through the 500 chats that the generative AI supposedly found, because you have no proof from beforehand that it's consistent. Hopefully that makes sense to you. Am I talking out of hand? What do you think?
Speaker 2:It definitely makes sense. I agree with all the points you're making. And actually, Malik just put up a chat: the biggest question is the algorithms. Are the algorithms used by the tool compliant with recognized forensic standards and practices? And that kind of goes along with everything you were just saying. I agree.
Speaker 1:I mean, do they exist? Do they exist? How do we even measure that?
Speaker 2:I don't know.
Speaker 1:What are the standards and best practices for AI use in digital forensics? I don't know them. Maybe they exist. Again, I think I can speak for you on this: we can agree that we just don't have enough knowledge to state that they don't exist, right? But I haven't seen them. And if they exist, they need to be popularized quick, and if they haven't been made, they need to be made. For example, Jessica Hyde talks about F1 scores. That's something that I told her she needs to teach me about, because that's about ways of measuring that validation of tooling (again, I don't have the knowledge) in regards to probabilistic analysis, and how you can have a confidence interval, which is good for dealing with data that's probabilistic in nature. I am ignorant on that. I'm really looking forward to having a conversation with Jessica. She's so busy, and she's been so kind in putting me in her calendar in the future, so I'm looking forward to that talk.
Speaker 2:Conference me in on that?
Speaker 1:Well, I don't see why not. We'll ask Jessica in a second. But the point I'm making with that is, yeah, there has to be something. Again, I don't believe we can take our regular way of validating and verifying things in DF and just move it over to AI. If we do, we might stub our toe without expecting it; we might be caught by surprise, you know. And look... oh, look. I'm sorry, but one more thing, and then I'll let you talk.
Speaker 2:I was just going to put it up. That's what I was going to do, thank you.
Speaker 1:Well then, read it. Please read it, because Brett really, really kind of... he really summarized my point there. Can you read that from Brett? "We are taking a technology, AI, that is not developed for DFIR, and forcing it into DFIR." Boom. And you know what?
Speaker 2:All the things I've been saying for the last 10 minutes, you just did it in one sentence.
Speaker 1:That's exactly what it is. That's exactly what it is, definitely. And did we put up the comment with the order of the judge in this case we're discussing? What was his order, do we have it?
Speaker 2:You know, I don't have it, but I do know what it was.
Speaker 1:I think I have it. Let me see if I can, if I can show it.
Speaker 2:I have the court case up on the screen. Somebody was asking in the chat about putting that court case up. It's up on the screen, and it'll also be in the show notes.
Speaker 1:So actually, you know what, I think I might be able to... I'm gonna share my screen, because I want to read it. I want people to see what his order was, in regards to his conclusions on the use of this thing. So here we go, share. Can you see that? Yeah? All right. So, admitting that the court has no objective understanding as to how Copilot works, Schopf suggested that the legal system could be disrupted if experts start overly relying on chatbots en masse. But what is the marketplace for these tools doing? What are they doing?
Speaker 2:Yeah, adding it.
Speaker 1:They're pushing it, like really, really a lot. They are pushing it hard.
Speaker 2:Yes. "You're going to be using this. It's going to make your life so much easier. Everything's going to get done faster." Until you get to court and your whole case is thrown out because you relied on it too much.
Speaker 1:So I guess my finishing point on this, for my part, is: you have to be an expert. This guy is an expert, and he did not use his expertise in a way that was satisfactory to the court. And that's not me talking; that's what the judge said, okay? So don't get me wrong, I'm not talking for anybody here. That's what the judge said.
Speaker 1:Right now, I look at myself: I need to know how the devices that I analyze work, how the data structures are laid out, how they're parsed, and then maybe I can look at ChatGPT, AI, or LLMs as something that might help me in some way. I don't think we're really there yet for this widespread use. I don't think so. And again, there are the legal issues in regards to discovery, in regards to validation, in regards to constantly having to verify every single thing that the LLM comes up with. So for now, I'm staying away from it. I'm not saying that you should or should not; that's for everybody to decide on their own. What I will say is that we need to actually look into these issues: have continuous conversations, be in touch with organizations like SWGDE, the Scientific Working Group on Digital Evidence, that are building frameworks to look at these things, and follow folks like Jessica Hyde and Brett Shavers, who are really smart on these things. Don't jump into the water with your eyes closed, right? It might be frozen. All right, I don't know. What do you think?
Speaker 2:No, definitely. I 100% agree with all of that. It's too soon to be overly reliant, and I look forward to future research maybe answering some of the questions that you just asked. I agree with you when you say "I just don't know," because I'm the same way. I just don't know everything there is to know about it, and it really makes me nervous to trust it in casework at all.
Speaker 1:So yeah, and I don't think I have the time to validate every single thing it produces. I might as well just do it myself.
Speaker 2:Right, exactly, I'll just do it myself.
Speaker 1:But you know, again, if you find value in it, I'm not saying you shouldn't use it. Just make sure you're aware of the limitations, that you're mitigating those limitations, and that you're complying with your discovery obligations. And let your prosecutors, or the lawyers you're working for if you're in the civil sector, know what you're doing, because the time savings you might get from that use might cost you your reputation as an expert. And in this field, your reputation is everything. There's nothing else. Can you be trusted? Like Brett said somewhere else in the chat, the AI doesn't swear to the contents of the report. The AI doesn't testify. You are the one swearing to it; you're the one testifying. And if you stub your toe with AI, your reputation is the one that's going to be dead. And if your reputation is dead in this field, guess what? You're out of the field.
Speaker 2:You may not be working cases at all anymore. Period. Yeah, definitely. Wow, craziness. All right. So yeah, we'll have more talks on AI in the future.
Speaker 1:I am 100% sure of it. Oh, I enjoyed this, this little segment.
Speaker 2:It was, like, it was great. Well, now I'm going to shift gears, because I want to show you guys a new tool that was released this week. It's called iCATCH, and it was created by Aaron Wilmarth. He had a need for a tool that works well with the iOS Cache.sqlite database. The Cache.sqlite database is like the main storage for Apple location data on an iOS device. He didn't like the way any of the tools were displaying it, and he didn't like how it was displayed in Google Earth from the exports out of those tools.
Speaker 2:So he started doing research on creating KMLs, pieced together some scripts he had used along with the new things he was learning, and he made a tool and released it to the public, I think just a couple of days ago. In my office we had been looking for something like this, so I was super excited to try it out. iCATCH stands for iOS Cache Analysis for Tracking Coordinates History. It's a utility that processes the iOS Cache.sqlite database and creates a timelined KML map for use in Google Earth. So I'm actually going to show this.
Speaker 1:And just a quick side note here If you're not familiar with the Cache SQLite, that's one of the premier databases in iOS devices in regard to geolocations. It's pretty, really accurate. It has a whole bunch of good data there and if you're working on a phone, an iOS device and you haven't looked at that database, you're missing out. You have to process those.
Speaker 2:Yes, so I have up on the screen the um, the GUI, the interface. Um, it's uh, you just have to do one line of script to install requirements to run this, but there's an executable already compiled for you. Um, so that's what I'm showing on the screen. On the screen You'll put in your case uh details. So I just kind of pre-filled this out. I have a New York Police. I did Examiner Heather, the case number 12345, device info I put iPhone 7.
Speaker 2:Then what the tool is looking for is it wants the database path. So you'll be exporting your cachesqlite database out of your extracted data and pointing this tool directly at the cache SQLite and then you just choose an output location for the different file formats that it creates, which it will generate a log file based off of the tool processing. It generates a CSV file with the information contained from the cache SQLite and the KMZ file for ingestion into Google Earth. On the interface you can choose which icon color you want. There's red, green, blue, yellow and purple currently. I'm just going to leave it red as the default. And then there's a date time filter. With the Cache SQLite, if you're not familiar with it, it stores thousands, tens of thousands of data points and they're very rapid fire. I'm going to limit this just to one hour, and it still is a ton of data points. If you try and point this at the entire cache SQLite and create your KMZ, you're going to crash Google Earth, so just.
Speaker 1:FYI yeah, unless you have some industry, like you know, strong mapping application like that. But yeah, it's not going to work. You have to limit those timestamps for sure.
Speaker 2:So I used Josh Hickman's iOS 17 image. Thank you, josh Love, that that was available to use and I just narrowed it to a date and time that I knew he had location data for. So July 24th, from 11 am to 12. Then you just click generate outputs. Once you click generate outputs, as soon as it is done, a box pops up and says you can't see it. But a box pops up and says CSV, kmz and log generated successfully. Do you want to open the directory? So I'm going to open the directory and let me share that screen. So what you get, you see here in my directory the log file related to the process, the CSV, that contains like the timestamp, the latitude, the longitude, the accuracy, and then you have the KMZ file that is ready to go in Google Earth and I actually preloaded it. So I won't take too much time to share the screen here. Let's see.
Speaker 1:We're going to overtime, but it's totally worth it, so stay with us.
Speaker 2:So this is what it looks like, mapped out. He has it set up so that each data point is identified by the record ID. Sorry, I couldn't think of it the record ID in the database. And here you see where I'm assuming Josh traveled between that one hour on July 24th. Let me just zoom in here. Ah, it worked Good. Oh, it didn't work, all right. Well, I'm just going to tell you, because over on the left-hand side you'll see the actual record ID and then above each record ID there's another box that can be checked, called accuracy for record, and then whatever number it is Aaron has in the KMZ file, if you check those accuracy check boxes, it'll actually show you the circle of accuracy around each point. So I actually want to show that I'm going to uncheck and just check all so that we have those accuracies, and then we'll just pick a random one and we'll zoom in on it. I'm actually going to stop this share and share it with the other option so you can see the writing as well.
Speaker 1:Yeah, please, yeah.
Speaker 2:I just chose window instead of entire screen. There we go. So on this particular point, if we continue to scroll in, you can see that GPS data point and in the box there's information about that record ID. So it's got my information that I put in about my case, the timestamp, which is in UTC, by the way, so that's something to take note of. And then latitude, longitude and accuracy. For this particular point the accuracy is four meters, so you can see that accuracy circle around the point. I'm just going to zoom in a little more here on the road. So I think this tool is pretty awesome.
Speaker 1:Yeah, and for folks that are not familiar with this type of analysis, that accuracy, that circle tells you that that device or whatever it is, was somewhere inside that area. Right, the larger the circle, the less accuracy you have, because now that device could have been somewhere in a big circle. But the smaller the circle, the better it is for you to say look, we're pretty confident it was here and not there.
Speaker 2:Yes, yes. One last thing I want to show. I'm going to zoom back out, so let's get zoomed way out here, also built in, and I don't know if it's going to go backwards or forwards for the first time, but we'll try it here.
Speaker 1:Let's do it.
Speaker 2:Is the doing it right now. There we go All right, If I hit play and I think I have this set to be on a loop it's going backwards or following the path that Josh took with his test phone on that day.
Speaker 1:Yeah, and for folks that are just listening, what Heather did? She took Google Earth and used the functionality to kind of play those points and now she's doing it kind of showing all right, the phone moved and you can set the speed, how fast you want it or not, but it's putting the dots on the map. This is good, because now you've got directionality right, you have a blob of dots everywhere, all right. So what went first, what went second? Right? Do I want to go and look at the timestamp one by one? No, let's just hit play and the dots appear in the order they were recorded on the device as the device was moving along the surface of the earth. So that's awesome, and I think Aaron is in, is in, is watching.
Speaker 2:Yeah, I see him.
Speaker 1:Yeah, and he said that. You know, he really hopes that it helps some people out in their exams and I believe I believe it will already focus on the chat, saying that he's going to use them at their work, they're going to use it for their master's thesis, like that's immediately their folks in the chat right now finding value. So, aaron, we appreciate you and, for folks that are listening, don't be afraid of sharing what you know and what you have right. Do we know about iCache I'm sorry, the Cache SQL databases? Sure, have we mapped it before? Sure, but this particular implementation and how he did it and the accessibility, we still need it, right, and just because somebody knew about us and me, oh well, I'm not going to do anything about it. There is value on your perspective, even if it's a topic that's known. Does that make sense? Heather? It does so. Please, folks, if you're listening, you have. Well, I had this idea, but I've seen it done before in a different way. It don't matter, put it out there.
Speaker 2:It's going to help, definitely. I think too, with this tool and I I chatted with aaron a little bit about it, but future considerations I hope that he'll implement um support for other databases right. So this is the cache. This is great for the ios, it's like the main database, but I think um life 360 could uh benefit from this with the Life360 locations in Android or iOS, and I mean there's a ton of other databases that record location data that this tool really could work well for.
Speaker 1:Oh, absolutely. And how he leverages, how Google parses KMC right, yeah, yeah, the KMC is fantastic. Look at that. The accuracy and all the be able to navigate to the points A really really good job, so well done.
Speaker 2:Yeah, it's awesome and I'm already using it at work. I love it. I was looking for this.
Speaker 1:There were a lot of great comments in the chat and I really apologize for the folks that we cannot make. Make them all right now. Put them in the chat because we run out of time. But if you're listening afterwards or watching afterwards, there's a great value in being here live, because they interact with really smart people here in the chat and you're going to learn a lot, even more than than what we could try to impart or share with you. The folks in the chat are great. So again, thanks for kevin and jessica and and brett and and all the other folks in in the chat that are asking questions, and we apologize if we can get to them because time kind of ran out, I know it's my fault.
Speaker 2:I talk a lot. Several of our topics are now going to be pushed to the next podcast, because it has been an hour and seven minutes already. Yeah, but it was a great hour and seven minutes.
Speaker 1:I believe so, I believe so, I believe, yeah me too, me too.
Speaker 2:Um, I am gonna do the meme of the week even though we've gone over um and I actually have two. I have two because it is halloween time, um, and so we have to do both of the halloween right, so that the next um podcast is supposed to fall on halloween. But alex has kids, so we can be doing that. You have to go trick-or-treating.
Speaker 1:Yeah, so we'll do the.
Speaker 2:Halloween memes now.
Speaker 1:Yeah, either my kids will kill me or the wife will divorce me, so I need to do trick-or-treating with the family.
Speaker 2:I think both would probably happen yeah.
Speaker 1:I want a little bit of that candy. You know the parent tax. There's a tax in my house from the parents, A percentage has to come to me as the parent.
Speaker 2:So go ahead and explain our memes.
Speaker 1:So you got two folks dressed I say folks, but two kids dressed as ghosts. One is a better well-dressed ghost than the other one, but you have to watch it. The point is that one gets candies in the bag and the other one gets a rock right, and I think a lot of us can relate to that rock right, the the. The text says my friend from another agency got a talino box, a mac lab bag, book, laptop, tons of removal media I can't spell, but tons of removal media and me, I got a rock, I I got nothing. And at some point, you know, we always, you know kind of hunting for parts to make things work when they break. But you know, that's part. The main thing is the mission and we try to do, you know, do the right thing with the tools that we have and we will make it happen.
Speaker 1:Right, and actually I have to go back for a second and there's a comment I have to share from from brett and um. He, he was saying this is one last little thing because I liked it a lot. Um, let's see if I can find it. He was saying that injustice overrides any person's reputation. Right, and at the end of the day, reputation is reputation is. Reputation is important. But but if your work and your carelessness is an injustice committed on somebody or somehow, then that's way worse than what I think, my think about you. Our work is a work of truth, right, and the truth goes before any of our reputations. So that's a great point. So, yeah, so that's the first beam going back.
Speaker 1:Yeah, sometimes don't get what we need, but it's okay, we'll make it happen, no matter what the mission, the mission will get accomplished, whatever it takes.
Speaker 2:And then we couldn't have a Halloween go by without sharing the law enforcement digital forensics examiner Halloween costume. This is a classic.
Speaker 1:We will be showing this meme for the next 90 years or until we turn 90, because we won't be here in 80 years by the time we turn 90. So we have a person here, a guy, right, dressed with boots, 5'11" khaki pants, you know tactical pants, like the tactical tactical belt, the tactical polo or the tactical shirt. If it's a shirt, the sleeves have to be rolled up, of course, with a tactical shunto or you know g-shock type of watch with the ball cap, all right, and that's the classic guitar forensic summer uniform in the whole planet. Okay, right, so your costume comes with that outfit, right, your 511 pants for both office and lab work. Right, because even we have a meeting with management, everybody will be in a suit except us. We'll still be in our, in our tactical pants and our polo shirt. That's just how the world works.
Speaker 1:Okay, we're gonna have pants that might have no ink. We will have a right blocker kit that we carry around that. We haven't updated the firmware since 2015 because nobody updates the firmware which test this, this. We need to update our firmware, okay, yeah, and and you know, this is really relatable, I think, because I can go into a place and I have a good feeling of who the examiners are, based on how they're dressed or how they behave or the different equipment they have near them. Right, definitely, even if you go to private sector, you're like oh, I don't do that anymore, but you did, I know you had it I know you're dressed like this.
Speaker 1:Don't deny it.
Speaker 2:I think it probably takes a little while to break that habit once you move from public to private sector as well.
Speaker 1:I don't know yet. I wouldn't know. Yeah, I wouldn't know yet. I do wear a lot of polo shirts, but they're all from like oh yeah, I mean I hope. I'm wearing polo shirts from a bunch of conferences because you know they're free and they're great, Right, so I wear those in the chat about the link for iCatch.
Speaker 2:I'm going to throw it up here real quick, so it's there while we say our goodbyes, but it'll also be in the show notes, and I shared it on LinkedIn. I think, alex, you did too.
Speaker 1:Yes, I shared it on the chat. The chat doesn't go out to LinkedIn. So, if you're on LinkedIn, you might not see that in the chat, so look at it on the screen, or you can go look at the show notes afterwards in YouTube or in whatever podcast directory of your preference, and you can get all the links for the show in one of those?
Speaker 2:Yes, okay.
Speaker 1:Yeah, I mean Jeremy's saying that we can make the show longer, but my kids are calling me. That's why we cannot make it that long.
Speaker 2:Oh yeah, no, there goes, the kids mad at you and divorced again.
Speaker 1:No, I mean, look, we do the show first of all because we like it, but also because we also appreciate the community that's been built around it. All you guys and gals, comments and insights, we really appreciate them. I do believe the guy in the fellow pants gets things done.
Speaker 1:So, yes, Matthew, that's actually correct and again, we appreciate you. So again, we're not going to have a show for Halloween because we're going to be doing trick-or-treating and do other things, but then after that I'm going to come back and see what happened the last two or three weeks after that I'll have some zoo pictures, some wildlife park pictures. Oh, I'm looking forward to the, the swimming penguins, all the good stuff yeah, me too all right, that's all I got. Anything else you got for the good order heather I have nothing else.
Speaker 2:Thank you so much well, thank you everybody.
Speaker 1:Uh, stay safe. We'll see you soon and have a good uh, a good afternoon, good night or good morning if you're in australia. Bye, bye, bye, thank you.