Player FM - Internet Radio Done Right
44 subscribers
Checked 8d ago
Added five years ago
Content provided by Malwarebytes. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Malwarebytes or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://nl.player.fm/legal.
Lock and Code
Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.
131 episodes
All episodes
It has probably happened to you before. You and a friend are talking—not texting, not DMing, not FaceTiming—but talking, physically face-to-face, about, say, an upcoming vacation, a new music festival, or a job offer you just got. And then, that same week, you start noticing some eerily specific ads. There’s the Instagram ad about carry-on luggage, the TikTok ad about earplugs, and the countless ads about laptop bags you encounter simply scrolling through the internet. And so you think: “Is my phone listening to me?” This question has been around for years and, today, it’s far from a conspiracy theory. Modern smartphones can and do listen to users for voice searches, smart assistant integration, and, obviously, phone calls. It’s not too outlandish to believe, then, that the microphones on smartphones could be used to listen to other conversations without users knowing about it. Recent news stories don’t help, either. In January, Apple agreed to pay $95 million to settle a lawsuit alleging that the company had eavesdropped on users’ conversations through its smart assistant Siri, and that it shared the recorded conversations with marketers for ad targeting. The lead plaintiff in the case specifically claimed that she and her daughter were recorded without their consent, which resulted in them receiving multiple ads for Air Jordans. In agreeing to pay the settlement, though, Apple denied any wrongdoing, with a spokesperson telling the BBC: “Siri data has never been used to build marketing profiles and it has never been sold to anyone for any purpose.” But statements like this have done little to ease public anxiety. Tech companies have been caught in multiple lies in the past, privacy invasions happen thousands of times a day, and ad targeting feels extreme for a simple reason: it is. Where, then, does the truth lie?
Today, on the Lock and Code podcast with David Ruiz, we speak with Electronic Frontier Foundation Staff Technologist Lena Cohen about the most mind-boggling forms of corporate surveillance—including an experimental ad-tracking technology that emitted ultrasonic sound waves—specific audience segments that marketing companies make when targeting people with ads, and, of course, whether our phones are really listening to us. “Companies are collecting so much information about us and in such covert ways that it really feels like they’re listening to us.” Tune in today. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com), licensed under Creative Commons: By Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Outro Music: “Good God” by Wowa (unminus.com). Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Google Chrome is, by far, the most popular web browser in the world. According to several metrics, Chrome accounts for anywhere between 52% and 66% of the current global market share for web browser use. At that higher estimate, that means that, if the 5.5 billion internet users around the world were to open up a web browser right now, 3.6 billion of them would open up Google Chrome. And because the browser is the most common portal to our daily universe of online activity—searching for answers to questions, looking up recipes, applying for jobs, posting on forums, accessing cloud applications, reading the news, comparing prices, recording Lock and Code, buying concert tickets, signing up for newsletters—the company that controls that browser likely knows a lot about its users. In the case of Google Chrome, that’s entirely true. Google Chrome knows the websites you visit, the searches you make (through Google), the links you click, and the device model you use, along with the version of Chrome you run. That may sound benign, but when collected over long periods of time, and when coupled with the mountains of data that other Google products collect about you, this wealth of data can paint a deeply intimate portrait of your life. Today, on the Lock and Code podcast with host David Ruiz, we speak with author, podcast host, and privacy advocate Carey Parker about what Google Chrome knows about you, why that data is sensitive, what “Incognito mode” really does, and what you can do in response. We also explain exactly why Google would want this data, and that’s to help it run as an ad company. “That’s what [Google is]. Full stop. Google is an ad company who just happens to make a web browser, and a search engine, and an email app, and a whole lot more than that.” Tune in today.
You can also listen to "Firewalls Don't Stop Dragons," the podcast hosted by Carey Parker, here: https://firewallsdontstopdragons.com/

1 How ads weirdly know your screen brightness, headphone jack use, and location, with Tim Shott 43:52
Something’s not right in the world of location data. In January, a location data broker named Gravy Analytics was hacked, with the alleged cybercriminal behind the attack posting an enormous amount of data online as proof. Though relatively unknown to most of the public, Gravy Analytics is big in the world of location data collection, and, according to an enforcement action from the US Federal Trade Commission last year , the company claimed to “collect, process, and curate more than 17 billion signals from around a billion mobile devices daily.” Those many billions of signals, because of the hack, were now on display for security researchers, journalists, and curious onlookers to peruse, and when they did, they found something interesting. Listed amongst the breached location data were occasional references to thousands of popular mobile apps, including Tinder, Grindr, Candy Crush, My Fitness Pal, Tumblr, and more. The implication, though unproven, was obvious: The mobile apps were named with specific lines of breached data because those apps were the source of that breached data. And, considering how readily location data is traded directly from mobile apps to data brokers to advertisers, this wasn’t too unusual a suggestion. Today, nearly every free mobile app makes money through ads. But ad purchasing and selling online is far more sophisticated than it used to be for newspapers and television programs. While companies still want to place their ads in front of demographics they believe will have the highest chance of making a purchase—think wealth planning ads inside the Wall Street Journal or toy commercials during cartoons—most of the process now happens through pieces of software that can place bids at data “auctions.” In short, mobile apps sometimes collect data about their users, including their location, device type, and even battery level. 
The apps then bring that data to an advertising auction, and separate companies “bid” on the ability to send their ads to, say, iPhone users in a certain time zone or Android users who speak a certain language. This process happens every single day, countless times every hour, but in the case of the Gravy Analytics breach, the makers of some of the apps referenced in the data said that, one, they’d never heard of Gravy Analytics, and two, no advertiser had the right to collect their users’ location data. In speaking to 404 Media, a representative from Tinder said: “We have no relationship with Gravy Analytics and have no evidence that this data was obtained from the Tinder app.” A representative for Grindr echoed the sentiment: “Grindr has never worked with or provided data to Gravy Analytics. We do not share data with data aggregators or brokers and have not shared geolocation with ad partners for many years.” And a representative for a Muslim prayer app, Muslim Pro, said much of the same: “Yes, we display ads through several ad networks to support the free version of the app. However, as mentioned above, we do not authorize these networks to collect location data of our users.” What all of this suggested was that some other mechanism was allowing for users of these apps to have their locations leaked and collected online. And to try to prove that, one independent researcher conducted an experiment: Could he find himself in his own potentially leaked data? Today, on the Lock and Code podcast with host David Ruiz, we speak with independent researcher Tim Shott about his investigation into leaked location data. In his experiment, Shott installed two mobile games that were referenced in the breach: an old game called Stack, and a more current game called Subway Surfers.
These games had no reason to know his location, and yet, within seconds, he was able to see more than a thousand requests for data that included his latitude, his longitude, and, as we’ll learn, a whole lot more. “I was surprised looking at all of those requests. Maybe 10 percent of [them had] familiar names of companies, of websites, which my data is being sent to… I think this market works the way that the less you know about it, the better from their perspective.” Tune in today.
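The auction flow described above, in which an ad SDK packages device signals into a bid request and competing buyers bid on the impression, can be sketched in a few lines of Python. This is a rough illustration only: the field names are hypothetical (loosely modeled on OpenRTB conventions), and the broker names and prices are invented.

```python
# Toy sketch of the in-app ad auction described above: the app bundles
# device signals into a bid request, buyers respond with prices, and a
# second-price auction picks the winner. Not any real exchange's API.

def build_bid_request(app_name, lat, lon, device_model, battery_level):
    """Bundle the signals an ad SDK might collect into one request."""
    return {
        "app": {"name": app_name},
        "device": {
            "model": device_model,
            "geo": {"lat": lat, "lon": lon},   # precise location
            "battery": battery_level,          # even battery level is a signal
        },
    }

def run_second_price_auction(bids):
    """Highest bidder wins but pays the second-highest price."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

request = build_bid_request("Subway Surfers", 37.77, -122.42, "iPhone14,2", 0.81)
bids = {"broker_a": 0.042, "broker_b": 0.055, "broker_c": 0.031}  # invented bids
winner, price = run_second_price_auction(bids)
# broker_b wins and pays broker_a's bid of 0.042
```

Note that the request itself, location included, is broadcast to every bidder, which is one plausible route by which a broker an app has never heard of could end up holding that app's users' coordinates.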
Insurance pricing in America makes a lot of sense, so long as you’re one of the insurance companies. Drivers are charged more for traveling long distances, having low credit, owning a two-seater instead of a four-seater, being on the receiving end of a car crash, and, increasingly, for any number of non-determinative data points that insurance companies use to assume higher risk. It’s a pricing model that most people find distasteful, but it’s also a pricing model that could become the norm if companies across the world begin implementing something called “surveillance pricing.” Surveillance pricing is the term used to describe companies charging people different prices for the exact same goods. That 50-inch TV could be $800 for one person and $700 for someone else, even though the same model was bought from the same retail location on the exact same day. Or airline tickets could be more expensive because they were purchased from a more expensive device—like a Mac laptop—and the company selling the ticket has decided that people with pricier computers can afford pricier tickets. Surveillance pricing is only possible because companies can collect enormous arrays of data about their consumers and then use that data to charge individual prices. A test prep company was once caught charging customers more if they lived in a neighborhood with a higher concentration of Asian residents, and a retail company was caught charging customers more if they were looking at prices on the company’s app while physically located in a store’s parking lot. This matter of data privacy isn’t some invisible invasion online, and it isn’t some esoteric framework of ad targeting. This is you paying the most that a company believes you will, for everything you buy. And it’s happening right now.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Consumer Watchdog Tech Privacy Advocate Justin Kloczko about where surveillance pricing is happening, what data is being used to determine prices, and why the practice is so nefarious. “It’s not like we’re all walking into a Starbucks and we’re seeing 12 different prices for a venti mocha latte,” said Kloczko, who recently authored a report on the same subject. “If that were the case, it’d be mayhem. There’d be a revolution.” Instead, Kloczko said: “Because we’re all buried in our own devices—and this is really happening on e-commerce websites and online, on your iPad, on your phone—you’re kind of siloed in your own world, and companies can get away with this.” Tune in today.
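The mechanism the episode describes, one listed price adjusted per shopper based on collected signals, can be made concrete with a minimal sketch. Everything here is invented for illustration: the signals, the multipliers, and the idea that any particular retailer uses exactly this logic.

```python
# Illustrative sketch (not any real company's logic) of "surveillance
# pricing": the same item quoted at different prices depending on signals
# like device type and physical proximity to a store.

BASE_PRICE = 700.00  # the same 50-inch TV for everyone, on paper

def quote_price(base, device_os, in_store_parking_lot):
    """Return a per-shopper price from hypothetical collected signals."""
    price = base
    if device_os == "macOS":       # pricier device, assume deeper pockets
        price *= 1.10
    if in_store_parking_lot:       # shopper already committed to the trip
        price *= 1.05
    return round(price, 2)

# Two people, same TV, same day: one pays $70 more for owning a Mac.
budget_quote = quote_price(BASE_PRICE, "windows", False)   # 700.0
premium_quote = quote_price(BASE_PRICE, "macOS", False)    # 770.0
```

The point of the sketch is that nothing about the product changes; only the profile of the buyer does.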
In February 2024, a 14-year-old boy from Orlando, Florida, committed suicide after confessing his love to the one figure who absorbed nearly all of his time: an AI chatbot. For months, Sewell Seltzer III had grown attached to an AI chatbot modeled after the famous “Game of Thrones” character Daenerys Targaryen. The Daenerys chatbot was not a licensed product; it had no relation to the franchise’s actors, writers, or producers. But none of that mattered, as, over time, Seltzer came to entrust Daenerys with some of his most vulnerable emotions. “I think about killing myself sometimes,” Seltzer wrote one day, and in response, Daenerys pushed back, asking, “Why the hell would you do something like that?” “So I can be free,” Seltzer said. “Free from what?” “From the world. From myself.” “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.” On Seltzer’s first reported reference to suicide, the AI chatbot pushed back, a guardrail against self-harm. But months later, when Seltzer discussed suicide again, his words weren’t so clear. After reportedly telling Daenerys that he loved her and that he wanted to “come home,” the AI chatbot encouraged Seltzer. “Please, come home to me as soon as possible, my love,” Daenerys wrote, to which Seltzer responded, “What if I told you I could come home right now?” The chatbot’s final message to Seltzer said: “… please do, my sweet king.” Daenerys Targaryen was originally hosted on an AI-powered chatbot platform called Character.AI. The service reportedly boasts 20 million users—many of them young—who engage with fictional characters like Homer Simpson and Tony Soprano, along with historical figures like Abraham Lincoln, Isaac Newton, and Anne Frank.
There are also entirely fabricated scenarios and chatbots, such as the “Debate Champion” who will debate anyone on, for instance, why Star Wars is overrated, or the “Awkward Family Dinner” that users can drop into to experience a cringe-filled, entertaining night. But while these chatbots can certainly provide entertainment, Character.AI co-founder Noam Shazeer believes they can offer much more: “It’s going to be super, super helpful to a lot of people who are lonely or depressed.” Today, on the Lock and Code podcast with host David Ruiz, we speak again with youth social services leader Courtney Brown about how teens are using AI tools today, who to “blame” in situations of AI and self-harm, and whether these chatbots actually aid in dealing with loneliness, or if they further entrench it. “You are not actually growing as a person who knows how to interact with other people by interacting with these chatbots, because that’s not what they’re designed for. They’re designed to increase engagement. They want you to keep using them.” Tune in today.
It’s Data Privacy Week right now, and that means, for the most part, that you’re going to see a lot of well-intentioned but clumsy information online about how to protect your data privacy. You’ll see articles about iPhone settings. You’ll hear acronyms for varying state laws. And you’ll probably see ads for a variety of apps, plug-ins, and online tools that can be difficult to navigate. Much of Malwarebytes—from Malwarebytes Labs, to the Lock and Code podcast, to the engineers, lawyers, and staff at large—works on data privacy, and we fault no advocate, technologist, or policy expert trying to earnestly inform the public about the importance of data privacy. But, even with good intentions, we cannot ignore the reality of the situation: data breaches happen every day, user data is broadly disrespected, and some of the worst offenders face few consequences. To be truly effective against these forces, data privacy guidance has to encompass more than fiddling with device settings or making onerous legal requests to companies. That’s why, for Data Privacy Week this year, we’re offering three pieces of advice that center on behavior. These changes won’t stop some of the worst invasions against your privacy, but we hope they provide a new framework to understand what you actually get when you practice data privacy: control. You have control over who sees where you are and what inferences they make from that. You have control over whether you continue using products that don’t respect your data privacy. And you have control over whether a fast food app is worth handing your location data to in exchange for a few measly coupons. Today, on the Lock and Code podcast, host David Ruiz explores his three rules for data privacy in 2025. In short, he recommends: Less location sharing. Only when you want it, only from those you trust, and never in the background, 24/7, for your apps. More accountability.
If companies can’t respect your data, respect yourself by dropping their products. No more data deals. That fast-food app offers more than just $4 off a combo meal; it creates a pipeline into your behavioral data. Tune in today.
The era of artificial-intelligence-everything is here, and with it come everyday surprises about exactly where the next AI tools might pop up. Major corporations are pushing customer support functions onto AI chatbots, Big Tech platforms are offering AI image generation for social media posts, and Google now includes AI-powered overviews in everyday searches by default. The next gold rush, it seems, is in AI, and for a group of technical and legal researchers at New York University and Cornell University, that could be a major problem. But to understand their concerns, there’s some explanation needed first, and it starts with Apple’s own plans for AI. Last October, Apple unveiled a service it is calling Apple Intelligence (“AI,” get it?), which provides the latest iPhones, iPads, and Mac computers with AI-powered writing tools, image generators, proofreading, and more. One notable feature in Apple Intelligence is Apple’s “notification summaries.” With Apple Intelligence, users can receive summarized versions of a day’s worth of notifications from their apps. That could be useful for an onslaught of breaking news notifications, or for an old college group thread that won’t shut up. The summaries themselves are hit-or-miss with users—one iPhone customer learned of his own breakup from an Apple Intelligence summary that said: “No longer in a relationship; wants belongings from the apartment.” What’s more interesting about the summaries, though, is how they interact with Apple’s messaging and text app, Messages. Messages is what is called an “end-to-end encrypted” messaging app. That means that only a message’s sender and its recipient can read the message itself. Even Apple, which moves the message along from one iPhone to another, cannot read the message. But if Apple cannot read the messages sent on its own Messages app, then how is Apple Intelligence able to summarize them for users?
That’s one of the questions that Mallory Knodel and her team at New York University and Cornell University tried to answer with a new paper on the compatibility between AI tools and end-to-end encrypted messaging apps. Make no mistake, this research isn’t into whether AI is “breaking” encryption by doing impressive computations at never-before-observed speeds. Instead, it’s about whether or not the promise of end-to-end encryption—of confidentiality—can be upheld when the messages sent through that promise can be analyzed by separate AI tools. And while the question may sound abstract, it’s far from being so. Already, AI bots can enter digital Zoom meetings to take notes. What happens if Zoom permits those same AI chatbots to enter meetings that users have chosen to be end-to-end encrypted? Is the chatbot another party to that conversation, and if so, what is the impact? Today, on the Lock and Code podcast with host David Ruiz, we speak with lead author and encryption expert Mallory Knodel about whether AI assistants can be compatible with end-to-end encrypted messaging apps, what motivations could sway current privacy champions into chasing AI development instead, and why these two technologies cannot co-exist in certain implementations. “An encrypted messaging app, at its essence, is encryption, and you can’t trade that away—the privacy or the confidentiality guarantees—for something else like AI if it’s fundamentally incompatible with those features.” Tune in today.
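The end-to-end property at the heart of this episode, that only the sender and recipient hold the key while the relay in the middle sees only ciphertext, can be illustrated with a toy cipher. The XOR-with-hash-keystream scheme below is purely illustrative and not secure; real messaging apps use vetted protocols such as Signal's. The point it demonstrates is structural: any AI summarizer must either run on a device that holds the key, or the "end-to-end" promise has quietly been widened to include another party.

```python
# Toy illustration of the end-to-end idea (NOT a secure cipher): the sender
# and recipient share a key; anyone relaying the message sees only
# ciphertext and cannot recover the plaintext without that key.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the shared key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, message: bytes) -> bytes:
    """XOR with the keystream; applying it twice recovers the message."""
    return bytes(a ^ b for a, b in zip(message, keystream(key, len(message))))

shared_key = b"sender-and-recipient-only"
msg = b"No longer in a relationship"
ciphertext = xor_encrypt(shared_key, msg)          # all the relay ever sees
assert ciphertext != msg                           # unreadable in transit
assert xor_encrypt(shared_key, ciphertext) == msg  # recipient recovers it
```

A server-side summarizer would sit at the relay's position in this sketch, where, by construction, there is nothing legible to summarize.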
You can see it on X. You can see it on Instagram. It’s flooding community pages on Facebook and filling up channels on YouTube. It’s called “AI slop” and it’s the fastest, laziest way to drive engagement. Like “clickbait” before it (“You won’t believe what happens next,” reads the trickster headline), AI slop can be understood as the latest online tactic for getting eyeballs, clicks, shares, comments, and views. With this go-around, however, the methodology is turbocharged with generative AI tools like ChatGPT, Midjourney, and MetaAI, which can all churn out endless waves of images and text with few restrictions. To rack up millions of views, a “fall aesthetic” account on X might post an AI-generated image of a candle-lit café table overlooking a rainy, romantic street. Or, perhaps, to make a quick buck, an author might “write” and publish an entirely AI-generated crockpot cookbook—they may even use AI to write the glowing reviews on Amazon. Or, to sway public opinion, a social media account may post an AI-generated image of a child stranded during a flood with the caption “Our government has failed us again.” There is, currently, another key characteristic of AI slop online, and that is its low quality. The dreamy, Vaseline sheen produced by many AI image generators is easy (for most people) to spot, and common mistakes in small details abound: stoves have nine burners, curtains hang on nothing, and human hands sometimes come with extra fingers. But little of that has mattered, as AI slop has continued to slosh about online. There are AI-generated children’s books being advertised relentlessly on the Amazon Kindle store. There are unachievable AI-generated crochet designs flooding Reddit. There is an Instagram account described as “Austin’s #1 restaurant” that only posts AI-generated images of fanciful food, like Moo Deng croissants, and Pikachu ravioli, and Obi-Wan Canoli.
There’s the entire phenomenon on Facebook that is now known only as “Shrimp Jesus.” If none of this is making much sense, you’ve come to the right place. Today, on the Lock and Code podcast with host David Ruiz, we’re speaking with Malwarebytes Labs Editor-in-Chief Anna Brading and ThreatDown Cybersecurity Evangelist Mark Stockley about AI slop—where it’s headed, what the consequences are, and whether anywhere is safe from its influence. Tune in today.
Privacy is many things for many people. For the teenager suffering from a bad breakup, privacy is the ability to stop sharing her location and to block her ex on social media. For the political dissident advocating against an oppressive government, privacy is the protection that comes from secure, digital communications. And for the California resident who wants to know exactly how they’re being included in so many targeted ads, privacy is the legal right to ask a marketing firm how they collect their data. In all these situations, privacy is being provided to a person, often by a company or that company’s employees. The decisions to disallow location sharing and block social media users are made—and implemented—by people. The engineering that goes into building a secure, end-to-end encrypted messaging platform is done by people. Likewise, the response to someone’s legal request is completed by either a lawyer, a paralegal, or someone with a career in compliance. In other words, privacy, for the people who spend their days with these companies, is work . It’s their expertise, their career, and their to-do list. But what does that work actually entail? Today, on the Lock and Code podcast with host David Ruiz, we speak with Transcend Field Chief Privacy Officer Ron de Jesus about the responsibilities of privacy professionals today and how experts balance the privacy of users with the goals of their companies. De Jesus also explains how everyday people can meaningfully judge whether a company’s privacy “promises” have any merit by looking into what the companies provide, including a legible privacy policy and “just-in-time” notifications that ask for consent for any data collection as it happens. “When companies provide these really easy-to-use controls around my personal information, that’s a really great trigger for me to say, hey, this company, really, is putting their money where their mouth is.” Tune in today. 
Two weeks ago, the Lock and Code podcast shared three stories about home products that requested, collected, or exposed sensitive data online. There were the air fryers that asked users to record audio through their smartphones. There was the smart ring maker that, even with privacy controls put into place, published data about users’ stress levels and heart rates. And there was the smart, AI-assisted vacuum that, through the failings of a group of contractors, allowed an image of a woman on a toilet to be shared on Facebook. These cautionary tales involved “smart devices,” products like speakers, fridges, washers and dryers, and thermostats that can connect to the internet. But there’s another smart device that many folks might forget about that can collect deeply personal information—their cars. Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what types of data modern vehicles can collect, and what the car makers behind those vehicles could do with those streams of information. In the episode, we spoke with researchers at Mozilla—working under the team name “Privacy Not Included”—who reviewed the privacy and data collection policies of many of today’s automakers. In short, the researchers concluded that cars are a privacy nightmare . According to the team’s research, Nissan said it can collect “sexual activity” information about consumers. Kia said it can collect information about a consumer’s “sex life.” Subaru passengers allegedly consented to the collection of their data by simply being in the vehicle . Volkswagen said it collects data like a person’s age and gender and whether they’re using their seatbelt, and it could use that information for targeted marketing purposes. And those are just the highlights. 
Explained Zoë MacDonald, content creator for Privacy Not Included: “We were pretty surprised by the data points that the car companies say they can collect… including social security number, information about your religion, your marital status, genetic information, disability status… immigration status, race.” In our full conversation from last year, we spoke with Privacy Not Included’s MacDonald and Jen Caltrider about the data that cars can collect, how that data can be shared, how it can be used, and whether consumers have any choice in the matter. Tune in today. You can also find us on Apple Podcasts , Spotify , and Google Podcasts , plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog . Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod ( incompetech.com ) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners .…
This month, a consumer rights group out of the UK posed a question to the public that they’d likely never considered: Were their air fryers spying on them? By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone. “In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason,” the group wrote in its findings. While it may be easy to discount the data collection requests of an air fryer app , it is getting harder to buy any type of product today that doesn’t connect to the internet, request your data, or share that data with unknown companies and contractors across the world. Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook. These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated. Tune in today. You can also find us on Apple Podcasts , Spotify , and Google Podcasts , plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog . 
Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod ( incompetech.com ) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners .…
The US presidential election is upon the American public, and with it come fears of “election interference.” But “election interference” is a broad term. It can mean the now-regular and expected foreign disinformation campaigns that are launched to sow political discord or to erode trust in American democracy. It can include domestic campaigns to disenfranchise voters in battleground states. And it can include the upsetting and increasing threats made to election officials and volunteers across the country. But there’s an even broader category of election interference that is of particular importance to this podcast, and that’s cybersecurity. Elections in the United States rely on a dizzying number of technologies. There are the voting machines themselves, there are electronic pollbooks that check voters in, there are optical scanners that tabulate the votes that the American public actually make when filling in an oval bubble with pen, or connecting an arrow with a solid line. And none of that is to mention the infrastructure that campaigns rely on every day to get information out—across websites, through emails, in text messages, and more. That interlocking complexity is only multiplied when you remember that each, individual state has its own way of complying with the Federal government’s rules and standards for running an election. As Cait Conley, Senior Advisor to the Director of the US Cybersecurity and Infrastructure Security Agency (CISA) explains in today’s episode: “There’s a common saying in the election space: If you’ve seen one state’s election, you’ve seen one state’s election.” How, then, are elections secured in the United States, and what threats does CISA defend against? 
Today, on the Lock and Code podcast with host David Ruiz, we speak with Conley about how CISA prepares and trains election officials and volunteers before the big day, whether or not an American’s vote can be “hacked,” and what the country is facing in the final days before an election, particularly from foreign adversaries that want to destabilize American trust. ”There’s a pretty good chance that you’re going to see Russia, Iran, or China try to claim that a distributed denial of service attack or a ransomware attack against a county is somehow going to impact the security or integrity of your vote. And it’s not true.” Tune in today. You can also find us on Apple Podcasts , Spotify , and Google Podcasts , plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog . Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod ( incompetech.com ) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners .…
On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer . This is because of the largely unchecked “data broker” industry. Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements. Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you’re driving, your car’s gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another. This is just a glimpse of what is happening to essentially every single adult who uses the internet today. So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large likely knows very little in return. Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising. 
“We’re seeing data that’s been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.” Tune in today. You can also find us on Apple Podcasts , Spotify , and Google Podcasts , plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog . Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod ( incompetech.com ) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners .…
Online scammers were seen this August stooping to a new low—abusing local funerals to steal from bereaved family and friends. Cybercrime has never been a job of morals (calling it a “job” is already lending it too much credit), but, for many years, scams wavered between clever and brusque. Take the “Nigerian prince” email scam which has plagued victims for close to two decades. In it, would-be victims would receive a mysterious, unwanted message from alleged royalty, and, in exchange for a little help in moving funds across international borders, would be handsomely rewarded. The scam was preposterous but effective—in fact, in 2019, CNBC reported that this very same “Nigerian prince” scam campaign resulted in $700,000 in losses for victims in the United States. Since then, scams have evolved dramatically. Cybercriminals today will send deceptive emails claiming to come from Netflix, or Google, or Uber, tricking victims into “resetting” their passwords. Cybercriminals will leverage global crises, like the COVID-19 pandemic, and send fraudulent requests for donations to nonprofits and hospital funds. And, time and again, cybercriminals will find a way to play on our emotions—be they fear, or urgency, or even affection—to lure us into unsafe places online. This summer, Malwarebytes social media manager Zach Hinkle encountered one such scam, and it happened while attending a funeral for a friend. In a campaign that Malwarebytes Labs is calling the “Facebook funeral live stream scam,” attendees at real funerals are being tricked into potentially signing up for a “live stream” service of the funerals they just attended. Today on the Lock and Code podcast with host David Ruiz, we speak with Hinkle and Malwarebytes security researcher Pieter Arntz about the Facebook funeral live stream scam, what potential victims have to watch out for, and how cybercriminals are targeting actual, grieving family members with such foul deceit. 
Hinkle also describes what he felt in the moment of trying to not only take the scam down, but to protect his friends from falling for it. “You’re grieving… and you go through a service and you’re feeling all these emotions, and then the emotion you feel is anger because someone is trying to take advantage of friends and loved ones, of somebody who has just died. That’s so appalling.” Tune in today. You can also find us on Apple Podcasts , Spotify , and Google Podcasts , plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog . Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod ( incompetech.com ) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners .
On August 15, the city of San Francisco launched an entirely new fight against the world of deepfake porn—it sued the websites that make the abusive material so easy to create. “Deepfakes,” as they’re often called, are fake images and videos that utilize artificial intelligence to swap the face of one person onto the body of another. The technology went viral in the late 2010s, as independent film editors would swap the actors of one film for another—replacing, say, Michael J. Fox in Back to the Future with Tom Holland. But very soon into the technology’s debut, it began being used to create pornographic images of actresses, celebrities, and, more recently, everyday high schoolers and college students. Similar to the threat of “revenge porn,” in which abusive exes extort their past partners with the potential release of sexually explicit photos and videos, “deepfake porn” is sometimes used to tarnish someone’s reputation or to embarrass them amongst friends and family. But deepfake porn is slightly different from the traditional understanding of “revenge porn” in that it can be created without any real relationship to the victim. Entire groups of strangers can take the image of one person and put it onto the body of a sex worker, or an adult film star, or another person who was filmed having sex or posing nude. The technology to create deepfake porn is more accessible than ever, and it’s led to a global crisis for teenage girls. In October of 2023, a reported group of more than 30 girls at a high school in New Jersey had their likenesses used by classmates to make sexually explicit and pornographic deepfakes. In March of this year, two teenage boys were arrested in Miami, Florida for allegedly creating deepfake nudes of male and female classmates who were between the ages of 12 and 13. And at the start of September, this month, the BBC reported that police in South Korea were investigating deepfake pornography rings at two major universities. 
While individual schools and local police departments in the United States are tackling deepfake porn harassment as it arises—with suspensions, expulsions, and arrests—the process is slow and reactive. Which is partly why San Francisco City Attorney David Chiu and his team took aim at not the individuals who create and spread deepfake porn, but at the websites that make it so easy to do so. Today, on the Lock and Code podcast with host David Ruiz, we speak with San Francisco City Attorney David Chiu about his team’s lawsuit against 16 deepfake porn websites, the city’s history in protecting Californians, and the severity of abuse that these websites offer as a paid service. “At least one of these websites specifically promotes the non-consensual nature of this. I’ll just quote: ‘Imagine wasting time taking her out on dates when you can just use website X to get her nudes.’” Tune in today. You can also find us on Apple Podcasts , Spotify , and Google Podcasts , plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog . Show notes and credits: Intro Music: “Spellbound” by Kevin MacLeod ( incompetech.com ) Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/ Outro Music: “Good God” by Wowa (unminus.com) Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners .…