
Digital Forensics Now
A podcast by digital forensics examiners for digital forensics examiners. Hear about the latest news in digital forensics and learn from researcher interviews with field memes sprinkled in.
The "Bear" Essentials of Digital Forensics 🐻
The digital forensics world isn’t slowing down — and neither are we. In this episode, we celebrate Heather’s well-deserved recognition as Cellebrite’s Mentor of the Year 2025. Naturally, there were a few speech mishaps and, somehow, a bear raiding Heather’s bird feeder (yes, actual wildlife). But between the chaos, we get serious about the fast-changing landscape of digital evidence collection.
We dig into Amazon’s decision to remove the "do not send voice recordings" setting from Echo devices — meaning all voice requests now head straight to the cloud for AI training. It’s part of a growing industry trend, raising huge privacy red flags. We also unpack a study showing AI search engines misattribute sources at rates over 60%, and discuss how leaning too hard on generative AI risks dulling the critical thinking that digital forensics demands.
On the technical front, Christian Peter reveals that some forensic tools alter or delete unified logs during extraction — a serious concern for evidence integrity that can compromise investigations before they even begin. We also walk through a deep dive into Snapchat artifacts, showing how to connect media files to user actions and locations by following database breadcrumbs that automated tools tend to overlook.
Through it all, one theme stays clear: while technology keeps racing ahead, the responsibility for getting it right stays firmly with the examiner. As one guest bluntly put it, "We might be the last generation of cognitive thinkers."
Tune in for a sharp, insightful, and slightly unpredictable conversation at the intersection of bears, bytes, and the future of digital evidence.
Notes:
Mobile Forensics: Are You Nerd Enough?
https://www.msab.com/events-webinars/webinar-are-you-nerd-enough/
New Podcasts!
https://osintcocktail.com/
https://www.youtube.com/@hexordia
Amazon "Do Not Send Voice Recordings" Privacy Feature
https://www.usatoday.com/story/tech/2025/03/17/amazon-echo-alexa-reporting-privacy/82503576007/
https://www.thesun.co.uk/tech/33907850/amazon-alexa-echo-do-not-send-voice-recordings
AI search engines cite incorrect news sources at an alarming 60% rate, study says
https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/
The Slow Collapse of Critical Thinking in OSINT due to AI
https://www.dutchosintguy.com/post/the-slow-collapse-of-critical-thinking-in-osint-due-to-ai
NIST
https://www.nist.gov/news-events/news/2025/01/updated-guidelines-managing-misuse-risk-dual-use-foundation-models
Don't lose your logbook
https://www.linkedin.com/pulse/dont-lose-your-logbook-christian-peter-ebcje
Not All Encryption is created equal
https://www.s-rminform.com/latest-thinking/cracking-the-vault-exposing-the-weaknesses-of-encrypted-apps
Welcome to the Digital Forensics Now podcast. Today is Thursday, April 10th, the year 2025. My name is Alexis Brignoni, aka Briggs, and I'm accompanied by my co-host: the Cellebrite C2C Mentor of the Year 2025, people! The Pied Piper of Albany, but with bears instead of mice. The anti-lodging long-distance driver. The unique, the only one, the mold was broken after she was made: Heather Charpentier. The music is by Shane Ivers and can be found at silvermansound.com. Hello everybody. I couldn't find the play button for the music, but here we are. Oh my God, I think that's the longest intro you've ever given me. Look, it's a lot of things that have been happening.
Speaker 1:I know people are dying to know all about these things, so we can tackle them in order. Look, people, Heather was the main character of the last two weeks.
Speaker 2:Like she was the main character.
Speaker 1:Energy. The main storyline was going through her. So if you were paying attention to the show of life, she was the main character for the last two weeks. So tell us.
Speaker 2:I don't know, I don't know about all that.
Speaker 1:It's true. It's true, people. It's true. What do you want to start with, sir?
Speaker 2:It's been a busy few weeks for sure. I see everybody writing congrats in the comments. Thank you very much.
Speaker 1:Exactly. Well deserved, extremely well deserved. Yes, the Pied Piper of bears, huh? And people are like, what?
Speaker 2:I may want to tell the story behind the Pied Piper of the bears. I think everybody knows from a previous show that I love birds. I have the Bird Buddy bird camera, so I feed the birds and I get them on the video camera that's built into the bird feeder. And there comes a time every year in New York when you have to take your bird feeder down, because certain animals come out of hibernation.
Speaker 1:Yeah Well other than people.
Speaker 2:Let me just share what I got on my bird cam. Hopefully everybody can see it. If you can't, write it in the chat. But yeah, just listen.
Speaker 1:That's not a dog making that sound, okay?
Speaker 2:So the bear decided to come out of hibernation and just hop right onto my bird feeder for a little snack. So the bird feeder is now down and in my house. It will go back out when the hummingbirds arrive.
Speaker 1:Do the birds come in and eat from inside the house?
Speaker 2:Oh yeah, no, no. I'm just going to hold off on the birds for a little while.
Speaker 1:Let me send a quick shout out to Duck Squash; it's the first time he or she catches the show, so good to have you. That's crazy, like scary. That video is scary. And where is that bird feeder? Like on a pole? Where do you have it?
Speaker 2:Not on a pole, right outside my window. I didn't want to go outside in the morning. I think you messaged me, hey, are you headed to the gym? Hell no, I am not leaving my house. I stayed home and did my thing. Lessons learned.
Speaker 1:Now that you've learned your lesson with the bear, I'm going to introduce to everybody... oh right, I already said it: the Mentor of the Year. And it was fantastic. Look, we were invited; I was nominated for something I didn't win, which is great, because I could really focus my energies on celebrating Heather. It was so much fun. We ate, we got dressed up. We'll show you some pictures of that in a second, but first I want to play, totally against her will...
Speaker 2:Completely.
Speaker 1:Her win. So everybody just enjoy this with me, please, as we comment on it live. Hopefully everybody can see this. There's Heather Barnhart from Cellebrite; listen to her speech.
Speaker 3:Being a mentor can be exhausting. I would not be here today if my first boss had not forced me to not just push buttons and get answers. I have several mentors; some of you are in this room and you may not even realize it. Mo, you mentor me and you may not realize it. I have mentees in this room that I may not even realize. It's all about giving back. Our jobs are hard, our jobs are exhausting. No one has time to mentor.
Speaker 3:that's what we all say when someone's like hey, can you introduce me? Yeah, and then you end up keeping communications with that person. It is our time. You can see that you raise the next generation.
Speaker 1:We're not all going to just like get the camera out of my face it is ongoing work that you are going to do forever.
Speaker 3:This is why we created the 101. The Cellebrite Community is a place for you and the younger generation to meet people, to groom them into being amazing examiners and investigators.
Speaker 1:So this person has gone above and beyond and has done more than you can imagine.
Speaker 3:It is my honor.
Speaker 1:When she said that, I knew who it was at that moment.
Speaker 2:Stop it. You did not.
Speaker 3:Back in the day. You know who you are, you use these.
Speaker 1:They had little Faraday bags inside. They were bigger back then.
Speaker 3:All right, from one Heather to another. Woo!
Speaker 2:Oh, my God.
Speaker 3:Yeah.
Speaker 1:On fire. No, this isn't the best part now.
Speaker 2:Nobody was giving speeches at this point, yeah.
Speaker 1:I know.
Speaker 3:Speech, speech, speech, speech! Oh, I could have killed them. I'm a co-worker right here. I work with Heather, guys.
Speaker 2:But thank you, thank you so much I appreciate it. I think I learn more every day mentoring people than I'm giving back, so thank you.
Speaker 1:And then Heather took the plaque and hit us over the head with it.
Speaker 2:I said "I hate you" at the end, yeah. Let me tell you that: I hate you.
Speaker 1:Oh, you did, you did. You said it a few more times after that, but that made it worthwhile. That I hate you was like the most valuable thing I got out of the whole experience.
Speaker 2:It was great, oh my gosh. So yeah, this will be our last podcast, because Alex and I are no longer friends after he put me on the spot to do a speech when nobody else was doing speeches.
Speaker 1:Well, the thing is that she's contractually obligated, so sorry, you know, there are lawyers involved. Some other podcasts trying to steal her? Not happening. You know who you are. Not happening.
Speaker 2:So that was great. Thank you for sharing that and embarrassing me. However, I want to show some pictures from the conference because it was a great time we went. We hung out with old friends, met some new friends that we had never met before.
Speaker 1:And look how sharp I mean Heather's just killing it.
Speaker 2:So this is my coworker, Kevin Lyback, who nominated me for the Mentor of the Year and decided to join Alex in screaming "Speech!" while I was on the stage.
Speaker 1:Look, I met him for the first time that day, and now he's my brother. He's like the best. Definitely the best, yeah.
Speaker 2:The one that I called a jerk as I walked up on the stage. Everybody knows Ronan; he's the life of the party at Cellebrite, for sure. At every party, you're right. The Binary Hick, Josh Hickman.
Speaker 3:The man. Got to hang out with Josh, yeah.
Speaker 2:JP from France, who works for Celebrite and does some really good research. If you haven't checked out his blogs or webinars, check them out. It's awesome.
Speaker 1:Some jerk there on your other side Some jerk.
Speaker 2:Yeah, I'm not even going to acknowledge the other guy in the picture.
Speaker 1:Keep going, keep going.
Speaker 2:This picture is actually three Heathers. We had three Heathers in the area at the same exact time.
Speaker 1:Heather overload.
Speaker 2:Yes, it was, yep. Besides getting to hang out with old friends and meeting new friends, we got to listen to quite a few good talks. One of the ones that stood out to me the most was Ian Whiffin and Heather Mahalik Barnhart; they gave a talk about verification and validation, and I thought it was really good.
Speaker 1:Yeah, and both of them, I mean, they're fantastic. Ian is the best, and Heather too.
Speaker 2:And then, of course, we got to see Geraldine Bly and Dan.
Speaker 1:Dan Ogden yeah.
Speaker 2:Yep and K-9 Siri. You can just see the top of her head there, but she was up on the stage too. They gave a presentation about a case that they worked on. It was a really great presentation.
Speaker 1:And they're awesome. I've known them for many years, especially Dan; I've known Dan from way back when. What great examiners and investigators, really good people.
Speaker 2:And then we have Alexis, myself and Bill, the phone wizard himself.
Speaker 1:We were all dressed up and ready to go to the gala. Bill is so much fun to be around.
Speaker 2:Yeah, oh, definitely. He gave a great presentation too. If anybody ever gets the chance to catch his presentation on expert witness testimony, it's top-notch; it gives you great pointers and a really good idea of how to testify as an expert witness. We also got to meet the one and only Lionel Notari from Switzerland, who everybody knows for his blogs on the unified logs. Great, great guy. What a down-to-earth, fun, just great guy.
Speaker 1:Oh, and that dude, he's like the most sharply dressed guy. It doesn't matter what time of day or night, what day, weekends: the guy always looks sharp. I'm like, dude, I'm going to aspire to look like you one day.
Speaker 2:That guy was sharp, yeah. Plus, he's super smart. Oh my God, yeah, definitely. We were causing trouble, so we got locked up: Alexis and the two Heathers in a jail cell.
Speaker 1:He gave us some time out.
Speaker 2:Yeah, and then here's a better shot of K-9 Siri and Geraldine and Dan and Alexis and myself at the gala. The mentor of the year picture with Heather and I.
Speaker 1:Oh, that's beautiful. I love that picture. Look at those there.
Speaker 2:That's a good one. And then the keynote speaker was Tim Tebow, and all of the people that won an award at the Digital Justice Awards, which they're calling the Justice, got to go back and actually meet Tim Tebow. That was really, really cool. And of course, we snuck my nominator, Kevin Lyback, back to see Tim Tebow, because he really wanted to meet him as well.
Speaker 1:So he came back and Heather was excited to meet a guy that plays sports ball or whatever that's called.
Speaker 2:Yeah, no, he gave a really good speech. I was excited to meet him because of that.
Speaker 1:Sports ball, eh? You know nothing about it, yeah.
Speaker 2:So that's just some of the highlights from the conference. It was held in DC last week, and it was a really good conference.
Speaker 1:I agree. Hopefully, if folks get a chance to attend next year, please do; it's a great event. It was the inaugural event, and I think it was really well run, really well done, and I've gone to dozens and dozens of events like this. Thank you, Cellebrite, for having me and Heather there as part of that event, and hopefully we can do it again in the future.
Speaker 2:Yeah, and they did announce too there's an early registration for next year. It's at a greatly discounted price, so if anybody is looking to register for next year, sign up.
Speaker 1:Do it, let's just do it.
Speaker 3:Uh-huh.
Speaker 1:Well, I think we ran out of "I love me." So we love ourselves.
Speaker 2:Yeah, that's enough of that.
Speaker 1:Let's do something productive now.
Speaker 2:Well, actually the next one is kind of a little bit about me, Sorry.
Speaker 1:Oh, we've got some "Heather loves Heather" left over. Let's get it done, yeah.
Speaker 2:So we were going to have a podcast come out prior to this one, but I got busy with some work stuff and we had to postpone until this week. There was a podcast called Mobile Forensics: Are You Nerd Enough? It was about extracting RAM from phones, put on by Cyber Social and hosted by Adam Firman from MSAB. It was Adam Firman, Dave Louder from MSAB, Wendy Wong from MSAB and myself, and we talked about extracting the RAM data, what you can get from it, and the different techniques. They talked quite a bit about their RAMalyzer and the techniques they use for RAM extractions. It aired on March 26th, but it's still available if anybody wants to check it out and view it, and I'll put the link in the show notes.
Speaker 1:No, it was a good show. I met Dave and Wendy in Sweden maybe a month, month and a half ago. Really, really sharp people; they're developing that capability. And watch the podcast: being able to extract RAM from Android devices is really a capability you need to consider having in your toolbox. We talked about it before the show, but it's worth saying again: you'll be surprised how much stuff you can get out of it. When you think you have everything there is, you will be surprised.
Speaker 2:Yeah, definitely so. This one's not about me.
Speaker 1:It's close enough. It's tangentially about you.
Speaker 2:I did want to highlight my agency, the New York State Police. They were honored with a special new award at the Magnet User Summit in Nashville, called the Magnet Agency Impact Award, honoring their commitment to protecting communities and advancing digital forensics. I'm just going to throw a little picture up there. We were awarded it; I wasn't there. That's my lieutenant right there in the middle holding the award, with some other New York State Police Computer Crime Unit employees and some Magnet employees presenting the award.
Speaker 1:You can see Jad there; as you're looking at the screen, to your left, that's Jad. I forgot the name of the guy to the left of him. So you've got some pretty famous people there, plus the nice people from your state police, just killing it, delivering justice all around the great state of New York. So congrats to all of you.
Speaker 2:Thank you. And man, did you see that award? You could really hurt somebody with that award. It's really pointed and sharp.
Speaker 1:Well, that's why you give it to law enforcement people. We deal with weapons daily.
Speaker 3:We know how to handle it.
Speaker 2:So we did want to talk about a couple of new podcasts that are out that everybody should check out. The first one is called OSINT Cocktail, and it is a podcast that delves into the world of open source intelligence and digital investigations, put on by investigators Kirby Plessis, Kelly Paxson, Amber Schroeder and Cynthia Navarro. They're talking about pop culture content and giving investigative feedback on some really pretty well-known cases and series, including one on Netflix.
Speaker 1:Yeah, I mean, everybody knows Amber, CEO and founder of Paraben. She's been around as an expert for a long time, so we all love her, and it's a great concept; I wish I had thought of it first. They go into a show, and then they discuss the forensics as portrayed on the show. Sometimes it's ridiculous, sometimes there's something to comment on and then build on. So it's a great idea, great show. Go check it out.
Speaker 2:The next one is Hexordia's podcast, called Truth and Data. In the first episode they talk about timely data preservation on mobile devices. Actually, let me back up for a minute: it's Jessica Hyde, Debbie Garner and Kim Bradley that are putting on this podcast.
Speaker 1:All stars. All these people are known to you, and if they're not known to you, you need to go find out who they are. I mean all stars. It's also a great show.
Speaker 2:Yeah, so they have put out their first episode and I'm going to actually watch it tonight when we're done, so don't ruin it for me, I haven't watched yet.
Speaker 1:I'm not going to spoil it, don't worry.
Speaker 2:Thank you. I think I know what the theme is going to be here, though. But yeah, they talk about the timely preservation of data on mobile devices. Such an important topic: if you're not getting those devices extracted immediately, you're losing data every minute. So, great topic.
Speaker 1:No, I'm not going to spoil it, but it's a good discussion, policy-wise and technical as well, in regard to what it means to extract data vis-à-vis the preservation concept. So again, go watch; it's really good.
Speaker 2:Yeah.
Speaker 1:Oh, and Jess is in the chat, so thank you for being here, as always. She's saying that every episode is a new conversation, really focused on a particular topic. Like us, we kind of freewheel on what's happening all around, but she and the folks on her podcast narrow down on a topic and run it to ground really well every week. So go check it out.
Speaker 2:Let's see what else we've got here. So this is another topic we were going to talk about last podcast; it hadn't happened yet, but it has happened now. Amazon was set to remove the "Do Not Send Voice Recordings" setting from its Echo devices, and it did actually happen on March 28th. This change means that all voice requests will be sent to Amazon's cloud, even if users have previously opted out. So if you have Echo devices, your Alexa device, your voice recordings are now being sent to the cloud. Mine's unplugged now; I think I'm done with it.
Speaker 1:Yeah yeah, yeah, I know, I know.
Speaker 2:Okay, yours is unplugged too. She talked in the middle of the night when I didn't prompt her anyway, which freaked me out.
Speaker 1:Maybe it was sleep talking, you know.
Speaker 2:Yeah, so she's gone now. But the feature will be replaced with "Don't Save Recordings," which still prevents recordings from being stored, but does not stop them from being processed in Amazon's cloud.
Speaker 1:Yeah, what the heck is that supposed to mean?
Speaker 2:They say they're not saving them.
Speaker 1:But they're processed, yeah.
Speaker 2:So they're going to process them, and are they immediately purged, or I'm confused?
Speaker 1:Look, if you process something, there has to be some output that goes somewhere.
Speaker 2:It's there, yep.
Speaker 1:Yeah, this is the thing, right, can you hear me? Yeah?
Speaker 2:Yes.
Speaker 1:Yeah. So this is the company that also kind of owns the Ring camera system and all that, and if I'm not mistaken, they got caught with employees watching Blink cameras and footage and stuff like that. I'm not accusing them of anything; the point I'm making is that, as consumers, we need to be smart. The best predictor of future behavior is past behavior, and it comes down to how much we value our privacy versus the convenience it gives. I don't know if you mentioned it, maybe I didn't hear it: the reason they're doing this, grabbing those recordings, what is it again? You said it.
Speaker 2:Oh yeah, AI. I didn't say it. Okay, yeah, you got it; you're saying it right now. There we go.
Speaker 1:So yeah, to train their AI. I mean, you all know our opinions on AI in general, and like Jess is saying in the chat, the data persists; if you share it, it goes somewhere, and I agree with her 100%. So you've got to make an informed decision about whether the convenience is worth your privacy, and about your information now being part of this multi-global conglomerate. So, just food for thought.
Speaker 2:Yeah. They do state, though, that they're not going to process Visual ID, so facial recognition, in the cloud. But I'm sure that's coming, if they're using it to train their AI and stuff.
Speaker 1:Now look, AI is such a... you know what, we're going to talk about AI a little bit later. But yeah, now it's just grab data for AI: grab data, grab data, grab data. I'll say it real quick: there's a push now by these companies. They want to train their AIs with any piece of data, any book, any media, anything, and consider that fair use.
Speaker 1:And if you're not familiar, fair use means that you can take small excerpts from any copyrighted publication or media for certain purposes without breaking the copyright or the law. For example, let's say there's an article or a book, and I'm writing an article about the book. I can take a little paragraph, not even a paragraph, a few sentences, to make reference in my article to the book. I can't take the whole book or whole pages; then I'm breaking the copyright. So I'm making fair use of it: a tiny bit, for another purpose, and there's more legality behind it. I'm not a lawyer, I just stayed at a Holiday Inn Express last night; old joke from the 90s, I know. But to really take everything, let's say all this material that you worked on, and give it to these people because they want to train their AI? Again, those are things that are being discussed in Congress and the courts, and we'll see how that shakes out.
Speaker 2:Yeah. Speaking of AI: AI search engines. There was a study, covered by Ars Technica and conducted by the Columbia Journalism Review's Tow Center for Digital Journalism. It found that AI-powered search engines frequently misattribute news sources, with an error rate exceeding 60%. 60%!
Speaker 1:Yikes.
Speaker 2:I don't know about you, but if I'm looking for a source, or even just something that I'm Googling, I don't want an error rate of 60%.
Speaker 1:And they don't tell you the error rate. That's one thing I've been complaining a lot about with any AI application: they just put it in, and you don't know what the error rate is. You have no way of gauging or evaluating the validity and usefulness of the capability. And the problem with AI is, if it says, well, I'm going to give you sources, okay, are you going to check the sources? What if the AI misattributes the sources, or just says that it got it from a source and didn't? And you can tell me, well, that would never happen. It has happened; AI has even made up sources.
Speaker 2:Right. The study goes on to say that researchers tested AI search tools by providing direct excerpts from real news articles and asking the model to correctly identify the original headline, publisher, publication date and URL. The models often provided incorrect or fabricated citations instead of declining to answer when unsure. That's the problem: they don't say "I don't know." They answer with something that is just not correct. And I think that goes for other things too, not just these news sources; I mean, it's anything you're asking.
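The scoring idea behind the study described above can be sketched mechanically: compare the citation fields a model returns against the known ground truth for an excerpt, and treat "declined to answer" differently from a confidently wrong (fabricated) citation. This is a hypothetical illustration, not the Tow Center's actual test harness, and all the data in it is made up:

```python
# Minimal sketch of scoring an AI search tool's citation against ground
# truth. The field names and example data are hypothetical; a real
# harness would query an actual AI search engine and use real articles.

def score_citation(truth: dict, answer: dict) -> dict:
    """For each cited field, record whether the model was correct,
    declined to answer (returned nothing), or gave a wrong/fabricated
    value. The study's complaint is the third bucket: models rarely
    land in 'declined' even when unsure."""
    result = {}
    for field in ("headline", "publisher", "url"):
        given = answer.get(field)
        if given is None:
            result[field] = "declined"
        elif given.strip().lower() == truth[field].strip().lower():
            result[field] = "correct"
        else:
            result[field] = "fabricated_or_wrong"
    return result

# Hypothetical ground truth and model answer for one tested excerpt.
truth = {"headline": "Example Headline",
         "publisher": "Example News",
         "url": "https://example.com/a"}
answer = {"headline": "Example Headline",
          "publisher": "Other Site",
          "url": "https://example.com/b"}

print(score_citation(truth, answer))
```

Run over many excerpts, counting the `fabricated_or_wrong` results is what produces an aggregate error rate like the 60%+ figure the hosts mention.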
Speaker 1:Yeah, but I mean, and this is the thing people don't seem to understand: it's called artificial intelligence, but, how do I say this, the term "intelligence" is artificial, as in, it's not intelligent. The thing is not thinking.
Speaker 1:What AI does is probabilistic. It tries to determine the most likely thing to say based on its training data, and then throws in a little bit of randomness here and there to make it sound human, right? And that's a problem, and we'll talk about an article in a second: if we're taking that as our main research or sourcing of things, that's a problem. From my perspective, it shuts down a part of your brain, because you're offloading that responsibility to the system, and you become not a thinker but an asker. The thinking behind the answer, you leave to the AI, because that's what the AI imposes upon you: ask me questions, I give you answers. So an AI prompt engineer just learns how to ask questions. That's important, but how about thinking? How about actually researching and actually doing certain things? Does that make sense to you, Heather?
Speaker 2:Yeah, it definitely makes sense, and actually that kind of leads right into the next topic, which is the slow collapse of critical thinking in OSINT due to AI. So there was an article by dutchosintguy.com.
Speaker 1:OSINT guy. Dutch OSint guy.
Speaker 2:Dutch OSINT Guy. I'm reading it as a URL, thank you. Dutch OSINT Guy. And that article warns that OSINT analysts are increasingly over-relying on generative AI tools like ChatGPT, Copilot and Gemini, and that it's eroding critical thinking in the OSINT field.
Speaker 1:Well, I want to make some comments on his article, but before I do that, I want to highlight some comments in the chat. So Brett Shavers, again a friend of the show, a personal friend, we love him, great expert, says AI can never be an expert because it will never admit being wrong. And I like this other comment that he makes: we're going to be the last generation of cognitive thinkers. It's kind of funny and sad, because yeah, we actually might be the last generation of people that actually care enough to think about things. So, for the article, I can mention an example he gives of how, in the field of OSINT (OSINT means open source intelligence), you might need to profile a person for a particular case.
Speaker 1:You ask the AI to profile the person, and it gives you all these results, really clean, really nice, about the person and their different interests. But the AI, in the example he gives, decides to omit information from far-right forums, because the AI was never trained to look into those forums. And you understand this, right? Some people keep different facets of their life separate: they're one person over here and another person over there. So an OSINT investigator's job is to look at everything they can. But if you offload it to the AI and the AI doesn't have visibility into something, what happens? Well, most likely that analyst will just take that data, put it in a report, and say, hey, this is good, this person is good to go, and they have no knowledge of these other, not destructive, but tendencies that need to be looked into. Let's put it that way. Does that make sense, Heather?
Speaker 2:It does, it makes perfect sense. With the OSINT field, the article goes on to say that you need to preserve human judgment and resist the temptation to offload thinking entirely to AI. We have to resist the temptation to rely on AI. It's tough, because you can easily go to any of these AI sites and just type a question, and it spits back an answer immediately.
Speaker 1:Yeah, and I don't believe, I mean, this is my opinion, I don't believe you can give the AI all the information that's been produced by humans or nature and train a model on it. And remember, a model is not the real thing, right? It's the best approximation of something; in this case, an all-encompassing model of the universe. That's impossible, it's too much information. So the AI is going to make the best approximation it can of what all the interactions in the universe might be. That's why it's a model.
Speaker 1:Right, you have a model car. It's like the car, but smaller, and they do the best job at it. There's a point where there are diminishing returns, and right now it's ubiquitous, it's being thrown at us from all angles. The one thing I would say to everybody is you have to not only ask questions of the AI, but also question the responses. And at some point you've got to realize that, depending on what your use case is, maybe thinking it through yourself is the better, faster way of getting where you need to go than asking the AI, then questioning the answers, then verifying those answers. It's definitely faster.
Speaker 2:I've asked the AI questions, then read the answer and been like, no, that's definitely not right. And you tell the AI that's not the answer, and it's like, okay, let me try again. It apologizes first. I'm sorry. And then, let me try again, and it's a totally different answer. I mean, which one do you believe?
Speaker 1:Well, there's a thing called vibe coding. What people do is they go and ask ChatGPT or whatever AI about a coding problem and take the answer. They just keep pasting code from the AI until it works. It's like they're vibing the code. Oh my God.
Speaker 2:I've tried it, I've definitely tried that.
Speaker 1:How did it work for you? It does not work out. It does not work out well at all.
Speaker 2:So I've done it because I bug you all the time about coding and I'm like, all right, I'm not bugging him today, I'm going to figure it out on my own. I'll get going and then, all right, it's still not working. I don't know what I did wrong, I don't want to bug you, so I'll just throw it into ChatGPT, because I have ChatGPT. I'll throw it in there and it'll be wrong, so anything I had right is now gone. It's so terrible at that. And you know what, I shouldn't be relying on it. How lazy of me is that? It's just insane.
Speaker 2:And I'm doing it because I don't want to bug you.
Speaker 1:Well, I mean, which is ridiculous. I told you to bug me all the time.
Speaker 2:I know, I know bug me.
Speaker 1:You're not bugging me. But even for people who know coding, you say, well, maybe I might use it. But what happens? At least when I use it, I end up finding the errors. I look at the code and think, well, this is not going to work because of this, so I end up spending time correcting it till it works.
Speaker 2:And then you look back and say, if I had done it from the beginning, I would have been done already. And part of the problem is I don't know how to do it all yet, because I haven't finished all the classes. I know, I know, but I don't know how to do it, so I can't recognize the things that it's getting wrong like you can. So I'm just going with it, and it's not going to work. It's never going to work.
Speaker 1:Well, and you make a point there, right. Obviously we ask these questions of the system hoping that it can answer things we don't know. And that's the thing: even when questioning the answer, if you don't have a broad understanding of the topic, you're never going to know where it's wrong. Like in forensics: if you're a really well-versed examiner using AI and you can verify some of it, that's fine. But the reality of our field is that most folks don't have the level of expertise to use this tool and verify the outputs correctly. They're going to be copy-pasting whatever it says, and it might be wrong, and they don't have the ability to detect that it's wrong. So we've got to really think about how we integrate that, if we are going to do that.
Speaker 2:Yes, I completely feel this comment right here from Aaron: hey ChatGPT, can you please break this code that was working, but I was trying to add one little thing. That's usually what it is with me. I'll take something that I already have working and I'll find an additional field in the SQLite database that I want to support in one of the LEAPP artifacts, and I won't exactly know how to incorporate that. So I'll just ask it to add this one little thing, and then everything I had working is just done. It's just done. So that comment's perfect.
Speaker 1:Perfect. And for the little thing, the AI adds like a hundred lines of code. Like, how can you add more code than what I had already?
Speaker 2:No, like, this little thing? No. Oh, it's insane. So yeah, I mean, it could be a good tool, it could be helpful, but how helpful is it if it's steering you in the wrong direction?
Speaker 1:Well, again, my take on that is, if you have a level of expertise on a topic and you want to use the AI to give you an assist, in a context where, based on the data, you can think it through, sure, you can do that. But if you're not an expert, I don't want to say expert, knowledgeable on the topic you're looking at, you've got to be careful. You won't be able to verify the output, since you don't have the knowledge.
Speaker 2:Yeah, well, here's another huge problem that Brett brings up too. I asked ChatGPT a question and its answer was a paragraph from a blog post that I had written years earlier. It was my post, word for word, without attribution. So in one of these articles, actually I think in both articles, it addresses that very issue. It's not giving the publisher the traffic on the website, for one, and it's also not giving them the credit for the article. And honestly, if you asked which source it was, Brett, it probably wouldn't have even known it was you.
Speaker 1:Well, and I've been blowing the horn, sounding the alarm, on this since last year. Imagine you go to court and you did some work using Windows or Excel, let's say, or Cellebrite, Magnet, MSAB, one of those tools, but you used a cracked version. You didn't pay for it. And they ask you at court, well, did you keep up your licenses? Was this a licensed product? And you're like, no. What do you think is going to happen? You stole the product you used to produce your work.
Speaker 1:Okay, that being said, imagine using AI. What about the attribution, the legality of the training data? Was it acquired properly? Was it copyright protected? Is this tool licensed? Is it modeling data properly? Will that come back to bite you later? It's not that you used the tool, but that the tool was stealing, for lack of a better word, data from all around to train this model. And this is something we just put on the system without thinking about the provenance of the training data. There are legalities around it. Like I said before, Brett put this article out. You can reference it under fair use, but you shouldn't just grab it and put it out there with no attribution. So there are some things to think about. How might that affect our workflow if we don't take time to understand the provenance of the training data for these AI systems?
Speaker 2:Definitely. Continuing with the AI theme, the National Institute of Standards and Technology recently, I believe last month, issued updated directives to scientists collaborating with the US Artificial Intelligence Safety Institute. The directives remove references to AI safety, responsible AI and AI fairness, and instead emphasize reducing ideological bias to promote human flourishing and economic competitiveness. What do you think of that one?
Speaker 1:Look. So my take on that, which I shared on LinkedIn, is that we should really look into international standards. Not because they're perfect, but, let me put it this way, because they're more resistant to influence from outside factors. It's more resistant when you have a really broad-based process for getting to an agreement on what these standards are. Now, they're not perfect, right? Especially considering some comments that Brett was making in that conversation, in the comments section on LinkedIn, for example.
Speaker 1:There's one thing about the technical way of doing something, and then there's how that technical work is presented at court. Is it enough? Is the standard enough to be able to use that data in a court in the US versus a court in Europe versus a court somewhere in the Pacific, whatever? Those are fair things to consider. But again, I will really try to impress upon everybody the need for international standards, to go look for them, because some standards that are really nation-focused are, from what we've seen, more prone to be influenced by outside entities. And the problem with that is, science is science, and the only outside influence on science should be more science. Science is made better by more science. And even when we talk about ethics, ethics is science. Empathy is science. We have scientific ways of understanding it. It has to be science. Hopefully that makes sense for everybody.
Speaker 1:Look into those international standards. Let me put up a comment from Jess here: sure, there are many bodies that are international standards-based, not only in the process but also the people. For example, DFRWS and DFIR Review are international bodies in terms of peer review, and that's so important. A lot of the country or regional bias can only be canceled out when work is presented at international conferences, international symposiums, international organizations, where everybody can collaborate and come to an agreement. So that's all I'm going to say about that, because I don't want to get into too much of a pickle here.
Speaker 2:Yeah, that's good. So let's move on to a recent blog post by Christian Peter, whose work we love. Christian Peter is the creator of UFADE, which we've talked about on the show quite a few times in the past. Actually, before we get into his recent blog post: there were also recently some updates to UFADE, so if you're using UFADE, go out and get that new version with the updates and improvements that Christian has implemented. His blog post discusses how some forensic tools may inadvertently alter or delete unified logs. If you've listened to the show at all, or read any of Lionel Notari's work on unified logs, you know there's some really important data in those logs. He emphasizes the critical importance of these logs in forensic investigations and recommends that when the device is presented with consent, meaning the device is unlocked or you have the passcode, you extract those unified logs prior to connecting and extracting with your forensic tools.
Speaker 1:Yeah, I mean, order of volatility is something that changes all the time. I remember the days when you got to a computer and, what do you do with it? If it's on, you would unplug it so you could stop any changes to the system. Just unplug it, then take it and image it. And then we learned that memory content is kept in the RAM, and if you unplug it, that content is lost, and we changed the order of volatility for how we acquire. Same thing with mobile devices. That's changing all the time. For example, now we're getting extractions from Android RAM, so will that change our process of acquisition? Those are discussions that are ongoing.
Speaker 1:But this is my big thing with this issue, and I try to be as straight a shooter as I can, so I'll give it to everybody straight. The tooling is wiping these logs before we get a chance to acquire them, and this was noted not by the tool makers but by community members. And there is radio silence to the community in regards to this issue. So it's up to community folks like us to let you know: if you're using an extraction tool that generates full file system extractions from a device, you need to pull out the Apple Unified Logs, the sysdiagnose logs, first.
Speaker 2:Oh, I lost you.
Speaker 1:Testing, testing.
Speaker 2:There we go, I got you.
Speaker 1:Okay yeah, my own computer went like. I heard the disconnecting connecting sound, I freaked out for a second.
Speaker 2:You sound a little bit like you're in a tunnel now.
Speaker 1:Yeah, that sucks. Maybe I'm using the, maybe my main microphone got lost, so that's why.
Speaker 2:Might be. I can hear you now, though, okay.
Speaker 1:Sorry, everybody, I was in mid-rant. I had folks in the chat telling me you couldn't hear me, but now you can. All right, so it's a thing. You need to pull those things out. And let me explain something real quick here for people to understand. We're not speaking as members of our agency. We don't represent the government, we represent ourselves, and we're commenting on something that was made known by a community member to the whole community, and we're going off of that knowledge. And if that's the case, then you need to do these things, and it's up to everybody listening to go and try it out yourself.
Speaker 1:Is your tooling deleting those logs? Get a test phone and extract it, do a full file system extraction and see what you get in the logs, if anything, and test it on your own. This is the science of it. So I'm going to make that clear: we're not here advocating or speaking for employers or agencies. We're just members of the community, and this is what we're seeing, this is what people, anecdotally, are telling us. That being the case, you need to pull those logs out before you get your full file system extraction with the tooling.
Speaker 1:And from a community member perspective, we would like to hear from the vendors on this sooner rather than later, beyond whatever technical solution they might provide, which I believe they have to, about the guidance on that order of volatility. And I would like to hear justification in regards to why this is happening. Because now you've got tooling that possibly, based on anecdotal evidence you can verify yourself, is deleting data from the system, and that could be problematic. In this field, we're trying to do the opposite. We're trying to acquire and preserve. So if you have a great reason for it, that's fantastic, but whatever your reason is, the preservation of the data is higher in that order. I don't know if you agree with me, and I'm kind of going out on a limb here, but that's my take as a member of the community, and I want to make that clear to everybody. You'll notice I never mentioned any vendor by name, because that's not up to me to do. You go and test it out. Go check the article. It was Christian, right, that wrote the article?
Speaker 2:Yeah, Christian.
Speaker 1:Yeah, Christian wrote that. Check his article out and do your own testing. That's what we're here for. We don't believe things just because; we do some testing and then we see if that's the case or not. And if it is, then we need to see how we mitigate those issues. I know some people will get mad at me. Well, too bad, so sad, it is what it is. Get the logs first.
Speaker 2:You can use Christian's tool, UFADE, to get the logs first.
Speaker 1:Get them. Preservation look, listen'. Podcast. They talk all about preservation and acquisition, and not precisely, maybe directly on this example, but the importance, what does preservation mean? Right, and the legalities around it, and that's something to consider. We are here to preserve data. Our tools and support support us on that. And again, tools are great.
Speaker 1:But blindly trusting tools is a problem. Blindly trusting tools, blindly trusting AI. I don't care who the vendor is. This is not about a particular vendor or set of vendors. This is just the concept, the moral concept of how we do our work. It's about the values we bring to our work: our due diligence, are we doing the job as we need to do it, and attention to detail, are we making sure we got all the data that we need? And if our tools are failing at one of those things, our moral values compel us to speak out and look for mitigating solutions and for permanent solutions. I would say reach out to your tool vendors on the topic, because I believe it needed to be addressed yesterday. Yesterday. Agreed.
Speaker 2:So another LinkedIn post we saw in the last few weeks is from Jordan Hare, titled Not All Encryption is Created Equal. This article is about encrypted applications like Gallery Vault and how they often boast about their great security, but from a forensic perspective it's poor security. We can recover data through forensic analysis and extraction. There are often weak PIN codes, poor encryption, and design choices that make these applications vulnerable when we extract, parse and analyze. So it's a good article. It specifically talks about Gallery Vault, an Android app designed to hide and encrypt files. We can definitely get into that app, even though it boasts great security. I can think of a whole bunch of other applications with that really big headline where they're like, we are encrypted end to end, nobody's going to get your data, and it's not always the case.
Speaker 1:Well, let me put it this way: in transit, the data is really well protected. They usually do a good job, and it's not hard. If your app is basically a browser, that's taken care of: SSL will encrypt your stuff as it moves through the wire. It's not rocket science. If you're really lazy, just embed a browser in your app and use SSL to move the data.
Speaker 1:The issues are the endpoints. Endpoint security is the big Achilles' heel of security, and it can be used for lawful purposes. In law enforcement, we look into endpoint security in order to lawfully acquire evidence to investigate and prosecute crimes. But bad actors will also attack the endpoint, through malware, to access communications or other things for nefarious purposes. Even nation states, as we've seen in the news, do that. So there's an issue there. Now, since that's the Achilles' heel, there are apps with really bad implementations of endpoint security within themselves, and this is not the first time I've seen that. There's some research done by an examiner whose blog is named after, like, a dog toy. I forgot the name. It'll come back to me; if not, I'll put it in the notes so you can check it out. What he did was reverse engineer the APK and found that the password protecting the app's encryption was hard-coded in the app. What that means is you can set whatever PIN you want, it doesn't matter. The thing is encrypted with a hard-coded password, which means that if you, Heather, have the app with data and I have the app with data and I get into your phone, I can use that hard-coded password to get into your vault, my vault, any vault. We actually have that implemented in ALEAPP. We've got two or three different vault apps that use a hard-coded password, and we implemented it, and you can still get into them. It's been years, they still work, and they're pretty popular apps.
Speaker 1:Incidental, chew Toy. See Josh to the rescue man. Josh is the best. I'm so happy to have you in the chat man. I love you. I saw Josh a few days ago at the event. I didn't have a chance to spend as much time as I wanted with him because you know, busy guy, celebrite event, rock star at Celebrite. So I'm hoping to spend way more time with Josh at Techno because we're going to see him there, so, yeah. So Josh came to the rescue with the whole. So the incidental tutor, he made that research and it's implemented in ALEEP, right Endpoint security.
Speaker 1:So if you're an examiner, you've got to broaden your scope of investigation. You might need to look into how to reverse engineer an APK, and classes, at least for forensic purposes, classes like the mobile forensics class that Heather Barnhart gives for SANS, teach you how to do that, how to go in and pull things from those APKs. If you don't know what an APK is, that's the package for a program that runs on your Android device, and you can do the same thing with other mobile apps. Look into those and you'll be surprised what you find. Like Kevin is saying, a lot of it is encoding, and people don't see the difference between encoding and encryption. Absolutely, big difference. So part of our job is not only hitting the button and saying, well, it was not decoded, I can't get in. No, do some investigation. Take that APK, reverse engineer it, convert it to a Java JAR, open it up and look for strings. Try to see if you can find something that might be of interest, and you will be surprised what you will find.
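[Editor's note: since an APK is just a ZIP archive, the strings hunt described above can be sketched with nothing but Python's standard library. This is a minimal illustration, not a real workflow: the "APK" here is built in memory, and the member name and embedded passphrase are entirely made up for the demo. The same scan function would work on a real APK path.]

```python
import io
import re
import zipfile

def ascii_strings(data: bytes, min_len: int = 6):
    """Pull printable ASCII runs out of a binary blob, like the Unix `strings` tool."""
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

def scan_apk(path_or_buf):
    """An APK is just a ZIP: walk every member and collect embedded strings."""
    hits = {}
    with zipfile.ZipFile(path_or_buf) as apk:
        for name in apk.namelist():
            found = ascii_strings(apk.read(name))
            if found:
                hits[name] = [s.decode("ascii") for s in found]
    return hits

# Build a stand-in "APK" in memory for illustration; the hard-coded
# passphrase below is fabricated, mimicking what vault apps have shipped.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"\x00\x01vault_master_key=Hunter2Hunter2\x00\x02")

for member, strings in scan_apk(buf).items():
    print(member, strings)
```

In practice you would decompile the DEX code properly (dex2jar, jadx, etc.), but a raw strings pass like this is often enough to surface a hard-coded key.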
Speaker 2:I was just reading on Twitter, or X or whatever the heck it is. I know I shouldn't be on there anymore, it's a cesspool of garbage now, but I was reading a post, and I don't even know who it was or how it got into my feed, but somebody was talking about how, oh, they'll never get my messages because I use Signal. It kind of fits right along with this.
Speaker 1:There's a term for that: what's your threat model? If your threat model is, I'm worried that somebody, based on my circumstances, will get my conversations because they're intercepting my communications, well, Signal addresses that threat level. But if your threat model is somebody actively trying to hack into my phone, like a nation state, well, just saying I have Signal might not be enough.
Speaker 1:The app is good. There's nothing wrong with the app. I think it's one of the better implemented communications apps, and the source code is open; people test it and validate it all the time. So I don't want to say it's bad, it's really good. But your threat model has to widen depending on what the risks are. If you're, let's say, in some country, a resistance, liberty-loving person trying to bring democracy to your home country and the government doesn't want you to, it's a made-up example, then your threat model has to be way wider than that of a person who just doesn't want some snooper reading their messages. Does that make sense, Heather? More or less?
Speaker 2:Yeah, definitely.
Speaker 1:Yeah, think about what your threat model is, what are the issues that you're confronting, what are your risks, what are your vulnerabilities, and then you can, you know, prepare accordingly and, you know, do whatever you need to do.
Speaker 2:All right. So how about we go over an artifact? I've been planning an artifact of the week for when we do the podcast, even though it's not every week. This week I'm going to show a little bit about Snapchat. There are tons of databases related to the Snapchat artifact; I did a few screenshots here of some of them. So when you're doing an investigation and you find a video or an image related to Snapchat, what do you do next? How did it get there? Did the user record it? What other information is available about the Snapchat video?
Speaker 2:So this video here, you can see the name of the video is a long letter and number combination, and it's got .decrypted at the end. I'll say I used Cellebrite, and they support decrypting this particular artifact. So I have the file name, but I don't really have much else for information about this video, and I wanted to know more. I was doing some testing on one of my test phones: what other information can I find out about this Snapchat video? The cache_controller database is where you go to find what's called the external key, a column that relates back to the Snapchat video. So I took the name of the video and searched for it in the cache_controller database. The table is the cache file claim table, and when I searched the name of the video, I got back the user ID related to the video, which was my Snapchat user account, and a column called external key. I took the letter and number combination of the external key, and the first thing I did was a search across the entire image. I wanted to see where that external key hits and in what databases. So here's a visual of all of the different Snapchat databases that the video's external key hits in.
Speaker 2:Once I did that, I started looking through the databases. One of the databases related to Snapchat is called SCDB-27. It's a SQLite database, and inside of it you can find created times, save times, snap capture times. The Z entry ID is actually that external key; that's how I located it in this database and in this particular table.
Speaker 2:Another table in that same database has information about whether the video has a location or not. This one has a one, a flag for yes, my video has location data, and again has the capture time, created time, duration of the video and other information. The video also hit in the memories asset repository SQLite database. I had saved this video as a memory in Snapchat, so that shows it was saved in memories. And then there's the gallery encrypted DB, which is decrypted by Cellebrite, and I believe other tools decrypt it too; I'm not 100% sure, but I know Cellebrite does. The snap location table holds the actual location information, and I found it, again, by taking that external key and searching the database for the video. The latitude and longitude come back to just about exactly where I was parked in the parking lot before heading in to enjoy some sushi at a local sushi restaurant near me.
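[Editor's note: the pivot described above (file name, to external key, to location) can be sketched with Python's sqlite3 module. The table and column names below follow the walkthrough, but real Snapchat schemas change between app versions, and the rows are invented, so treat this as a pattern for chasing a key across databases, not as the actual schema.]

```python
import sqlite3

# Mock stand-ins for Snapchat's cache_controller and location tables;
# names and values are assumptions for illustration only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cache_file_claim (file_name TEXT, user_id TEXT, external_key TEXT);
CREATE TABLE snap_location (external_key TEXT, latitude REAL, longitude REAL);
INSERT INTO cache_file_claim VALUES
  ('b7f3a1c2.mp4.decrypted', 'user-123', 'EXT-KEY-001');
INSERT INTO snap_location VALUES ('EXT-KEY-001', 43.05, -76.15);
""")

# Step 1: from the media file name, recover the user ID and external key.
user_id, external_key = con.execute(
    "SELECT user_id, external_key FROM cache_file_claim WHERE file_name = ?",
    ("b7f3a1c2.mp4.decrypted",),
).fetchone()

# Step 2: pivot on the external key into the location table.
lat, lon = con.execute(
    "SELECT latitude, longitude FROM snap_location WHERE external_key = ?",
    (external_key,),
).fetchone()
print(user_id, external_key, lat, lon)
```

The same two-step pivot generalizes to every other database the external key hits in: query once to get the key, then reuse it as the join value everywhere else.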
Speaker 1:As one does. Get that sushi.
Speaker 2:Definitely, definitely.
Speaker 1:Which we'll be getting a lot of at IACIS, but that's another story for another day. We better be.
Speaker 2:I expect it. So with this I kind of just wanted to show how you can connect the dots on a Snapchat image or video. When it's parsed by certain tools, you may not see the entire picture. You may have to go into the databases and find some of that information about the video yourself. The video was parsed perfectly; I could watch it. It's actually me filming the parking lot of the Hanzo Japanese Steakhouse, and I was able to prove a whole bunch of other data about this video. So if it were part of one of my cases, I would have had the exact location the video was recorded and all of the capture times, whether it's a memory, whether it's shared. You can find that this one wasn't shared, and all kinds of other information, just by finding that external key in the cache_controller database and following the breadcrumbs to all of the other databases related to Snapchat.
Speaker 1:Yeah, and you make such a great point for everybody to remember. If you go back to the first slide, the tools will find the media with their media-finding procedures. But notice, and this is not a dig at any tool, this happens a lot with all tools: they take that media, and if it came from a decrypted container or from a database, there's no metadata on the file itself. You look at it and think, well, there's no metadata, and it's super important. But that data might be somewhere, and it might be inside a database, which it is. There's no mapping, no position, nothing on the file, but the lats and longs are there in the database. So we have to go that step further and figure out: could this reside somewhere else?
Speaker 1:And you know this because you know how apps work. You know that some of these apps, especially apps that receive and send data, will keep track of that media in another location. We were talking about it before the show: if you receive a piece of media called image1 and you receive another that's also called image1, they're going to collide, and one might be deleted. So these systems keep track, with individual IDs for every piece of media, and they keep the file name and other metadata in databases so they can reproduce the media for you while avoiding naming conflicts and all the other issues that would happen if they didn't track that information. It's on you to go get it. I have another example from an artifact I worked on some time ago, one of the caches on Android, the Image Cache Manager.
Speaker 1:The Image Cache Manager. Do the tools find the pictures? Of course they find them. They're there. But the tools don't give you the context that these images are related to the app and were rendered by the app. And with some of these images, for example one of the gallery vaults: the originals are all encrypted, but the little cached copies are there. You know the contents of the vault because copies of it sit inside that cache, and the artifact puts that together for you.
Speaker 1:Right. And again, never just look at the tool and say, oh, I don't see a timestamp, it doesn't exist. Think about the app. How does the app work? Is the media coming from a decrypted database or decrypted data source? If that's the case, expect that the metadata is going to be managed by the app somewhere else. The question is, where? It might be inside a protobuf data structure, it might be inside a SQLite database, it might be JSON inside SQLite, which we've seen a lot, right, Heather? Yes. So the idea, for you as an examiner, is to have that thought process, to step into that gap where the tool doesn't show you something, because more times than not there's something to actually obtain that has value for your case.
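[Editor's note: when the metadata turns out to be JSON stashed inside a SQLite column, pulling it back out is a one-liner once you know where to look. A small sketch; the table name, column names, and JSON fields here are all invented for illustration.]

```python
import json
import sqlite3

# A mock table where an app has stashed media metadata as a JSON blob
# inside a SQLite TEXT column; everything here is hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE media_meta (media_id TEXT, blob TEXT)")
con.execute(
    "INSERT INTO media_meta VALUES (?, ?)",
    ("EXT-KEY-001", json.dumps({"capture_time": 1711987200, "saved": True})),
)

# The tool may show only the media file; the timestamp lives in the blob.
for media_id, blob in con.execute("SELECT media_id, blob FROM media_meta"):
    meta = json.loads(blob)
    print(media_id, meta["capture_time"], meta["saved"])
```

Newer SQLite builds also expose json_extract() so the same pull can be done inside the query itself, but decoding in Python works against any SQLite version.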
Speaker 2:Yeah, and this one, I mean, the stuff I need is parsed by the tools. The location is parsed, the video is parsed. I just wanted to find additional data and see if there was anything else that might be helpful in a case. And there is. I may not have correlated that that video was in the locations table, but following it through its entire lifespan in the Snapchat databases gives you the full picture.
Speaker 1:There's some memories, right? What makes a memory a memory?
Speaker 2:It was saved. I don't even know how Snapchat really works all the way, but it was saved in my memories section of Snapchat. I actually hit save at one point to get it there, so it wasn't in my camera roll. It was actually saved into my memories section.
Speaker 1:That's important, right? Yes, that's important. It tells you about the user interaction, what was happening in the world at the moment that happened. Do the tools highlight that for you? Did it highlight the memories part for you? The tooling?
Speaker 2:I didn't see the memories part, no, but the location and the video I saw.

Speaker 1:But that's the point I'm making, right. You dig a little bit and now you're finding the memories. There was some overt action from the user to put this here as opposed to there. Yeah, and it's a made-up example, but it could mean many different things. The thing is, you need to know that it's there to make some determinations about what happened.
Speaker 2:Yeah, on older versions of Snapchat, it's not there anymore, but there used to be a plist called stories.plist, and that would actually tell you if the video was shared to the user's stories. That plist isn't there anymore, so I need to do some further testing and investigating to see where that data is stored, because it's still somewhere, maybe in one of the databases. I just haven't tested recently, and that plist is obsolete now in some of the newer versions of Snapchat.
Speaker 1:There's a lot of questions in the chat. I mean, we're out of time, we're five minutes over time, but I just want to make a quick point for the folks asking some questions. If you're really interested in the provenance of images, TheForensicScooter, Scott. I almost forgot his last name. Koenig, Scott Koenig, yeah. He did an incredible set of articles and artifacts in the LEAPPs, in iLEAPP, that look at the Photos.sqlite database. The Photos.sqlite database keeps track of a lot of metadata about your pictures in general, and one of the fields that I like is the bundle ID field, which tells you where that picture came from, if it came from an app and, if so, what app it came from. For the folks asking about provenance, Cellebrite Physical Analyzer has Insights as a module for provenance of images.
Speaker 1:I forgot the name of it. Media Origin. It's really good because it's probabilistic, in the sense that it makes a determination that you have to verify. But it's good because it highlights possible images and where they came from, if they were downloaded, if they came from a particular app, if they were generated by the phone, and again, it adds up different artifacts or indicators to make a determination. Then you can go and verify, right. Verify and validate. Well, let me take that back. Validation is another thing. You verify that it's correct, and I find it to be useful for that. So there are many ways of investigating provenance of media on iOS devices, and there are a lot of articles and tooling that might help you with that.
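The bundle ID lookup described above can be sketched as a SQLite query. The Photos.sqlite schema changes between iOS versions, so the table and column names below (ZASSET, ZADDITIONALASSETATTRIBUTES, ZIMPORTEDBYBUNDLEIDENTIFIER) are one common layout, mocked here with invented rows so the query can run; always verify the names against your own extraction.

```python
import sqlite3

# Sketch of a provenance query against iOS Photos.sqlite.
# Column names vary by iOS version -- ZIMPORTEDBYBUNDLEIDENTIFIER is one
# commonly seen name for the "bundle ID" field, but verify on your image.
# A tiny subset of the schema is mocked here so the query itself runs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ZASSET (
    Z_PK INTEGER PRIMARY KEY, ZFILENAME TEXT, ZADDITIONALATTRIBUTES INTEGER);
CREATE TABLE ZADDITIONALASSETATTRIBUTES (
    Z_PK INTEGER PRIMARY KEY, ZIMPORTEDBYBUNDLEIDENTIFIER TEXT);
INSERT INTO ZADDITIONALASSETATTRIBUTES VALUES (1, 'com.snapchat.snapchat'), (2, NULL);
INSERT INTO ZASSET VALUES (1, 'IMG_0001.HEIC', 1), (2, 'IMG_0002.HEIC', 2);
""")

# Join each asset to its additional attributes to recover the source app.
rows = conn.execute("""
    SELECT a.ZFILENAME, aa.ZIMPORTEDBYBUNDLEIDENTIFIER
    FROM ZASSET a
    JOIN ZADDITIONALASSETATTRIBUTES aa ON a.ZADDITIONALATTRIBUTES = aa.Z_PK
    ORDER BY a.Z_PK
""").fetchall()
for fname, bundle in rows:
    print(fname, bundle or "(no bundle ID recorded)")
```

A NULL bundle ID is only an indicator, not a determination; as noted above, any provenance conclusion still has to be verified against other artifacts.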
Speaker 2:Yeah, definitely. There's one comment that asks whether metadata can specify if a video came from the gallery of the phone or was made through the app. There are indicators and flags for that. I had a coworker who actually had a case, and he was following the path, and I don't remember exactly how he figured it out, but I'm going to get back to Malik on that, because I know that we had a case where the image was coming from the gallery and then was shared in Snapchat, and there is a way to tell that.
Speaker 1:So I'll get back to you on that. And Josh, the man, I mean, he clearly states that Android has something similar in the external.db SQLite data store, right? It has a field called owning package or something like that, where it also tells you the provenance, where the picture came from. So you have places to look in iOS and Android when that's important for your case.
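The Android side mentioned here can be sketched the same way. Newer Android versions record an owner_package_name column in the MediaStore database (external.db) files table; the mini schema and rows below are mocked so the query runs, and the real column set varies by Android version, so check the schema on your own extraction.

```python
import sqlite3

# Sketch of a provenance lookup in Android's MediaStore external.db.
# The files table on newer Android versions carries owner_package_name,
# which records which package created the media file. The schema and
# rows here are mocked for illustration; real databases have many more columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE files (
    _id INTEGER PRIMARY KEY, _display_name TEXT, owner_package_name TEXT);
INSERT INTO files VALUES
    (1, 'vid_001.mp4', 'com.snapchat.android'),
    (2, 'IMG_20240501.jpg', 'com.android.camera');
""")

# Map each media file back to the package that owns it.
rows = conn.execute(
    "SELECT _display_name, owner_package_name FROM files ORDER BY _id"
).fetchall()
for name, owner in rows:
    print(f"{name} <- {owner}")
```

As with the iOS example, the owning package is a lead to follow, not a final answer; correlate it with the app's own databases before drawing conclusions.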
Speaker 2:So let me remove this here. I don't know if we missed any of the comments, but if we did, you guys can reach out to me on LinkedIn if I missed anybody's questions. Yeah, so we are at the end. It's been an hour and eight minutes, and we have the meme of the week.
Speaker 1:Oh wait, wait, wait, before we do that. Yep, Scott is also in the chat. Experts in this chat. I'm sorry, this is the smartest digital forensics hour on the planet. Every week we come out because we've got the experts, like, oozing out of our chat. It warms my heart, right. So Scott, the man, the world expert on Photos.sqlite, because he is. He won't accept it, he's really humble, he's a really nice guy, but he's the world expert on this stuff. He's coming out with some blogs on steps that can be taken to gain confidence in this analysis of iOS device-captured media. So I'm really looking forward to that when it comes out, Scott, and thank you for giving us a heads up that it's coming. We appreciate it.
Speaker 2:Oh, and when we were at the Cellebrite conference, before we go to the meme of the week here, Scott did a presentation on his Photos.sqlite research, and they have a really limited amount of time.

Speaker 2:I have never seen somebody pack so much information into 45 minutes in my life, because that Photos.sqlite stuff, that's a lot. He has done so many queries for that, and it's a lot to talk about. You should have heard how fast he was talking, but he got it all in there, and it was an excellent, excellent presentation.
Speaker 1:Uh, you know it was well headed because you're like, I got information now already definitely so.
Speaker 2:If you ever get the chance to hear about the photo sequel like live from him, definitely recommend it's an experience what's that?
Speaker 1:it's an experience yeah, definitely so.
Speaker 2:We it's an experience. Yeah, definitely so. We have the meme of the week. You take this one away.
Speaker 1:Explain away. So you've got the classic meme of the guy with a girlfriend, and he's looking at another girl that's walking by, and the girlfriend's like, what are you looking at? The guy looking is labeled "digital forensics examiner," with quotations. The girlfriend, who is offended because he's looking somewhere else, at another girl, is labeled "copy and paste from the tool report." And the girl that's walking by, that's so attractive to this guy, to the examiner, is labeled "copy and paste from Gen AI, from generative AI." I made the point on that because I'm really trying, through this time period where AI is being implemented in our tooling and our procedures, to create some awareness that our processes, and the way we do work as individual examiners, need to be revisited, and revisited every so often.
Speaker 1:This habit of taking tool output and just putting it in our reports with little verification is bad, but applying the same concept to generative AI within our tooling is going to be way worse. So at the end of the day, the issue is not the tool or the reports, or even Gen AI, right. The issue here is the examiner, us, because at the end of the day, the ones responsible for the acquisition, preservation, parsing, analysis and reporting of that data are us. It's you that are listening. It is the person that's watching this podcast right now. So it's on us to have that propriety, moral propriety, to have that attention to detail and do our due diligence, and we cannot offload or outsource that, either to the tool reports or to Gen AI, generative AI. It's up to us, and it's a call for us to really up our level of expertise and look for ways to actually be effective in the best way possible.

Speaker 2:Couldn't agree more.
Speaker 2:All right, that's all I've got. That's all we've got, so, yeah, nothing else. Nothing else for the good of the order, Heather. I have no more topics, and we've gone past our hour. Not that it matters.

Speaker 1:Not that it matters, yeah. We do it every episode anyway.

Speaker 2:Yeah, we do. You're right, it's not the longest one. That's true, that's true.
Speaker 1:So again I want to thank all the folks and they've been chatting with us, watching live jess and brett and scott and kevin, and all the folks that we had a chance to recognize in the chat. Uh, malik making great questions, um you, you made the show interesting to had a chance to recognize in the chat, malik making great questions. You made the show interesting to us and interesting to others. So thank you for being here live. The folks that watch later and listen later. We also appreciate you for supporting the podcast. Hopefully it brings value to you a little bit of humor every now and then. And feel free to reach out to Heather, not me, I'm kidding.
Speaker 2:No, honestly, there were a few questions in there that I think I missed so seriously.
Speaker 1:If anybody wants to hit me up in the linkedin messages, I will attempt, attempt to answer your questions absolutely and, don't worry, we will collaborate. So I well, again, thank you everybody. Uh, we love you to pieces and uh, let's keep investigating, let's keep it going. Yeah, thank you, take care, and everybody, as I put the music on, have a fantastic night, take care, and then I hit play, we'll be right back. Thank you.