PHISH & TELL 021

The Cybersecurity Brief for Women Who Mean Business

vgws

👋 WELCOME to Phish & Tell™️, from Security Done Easy™️

You’re not just building a business.
You’re building something worth protecting.

Guess what! I was announced a winner for the Stevie Award for best female business blogger! I almost didn’t apply because I thought, it’s been less than a year, but…. I did it anyway! (On November 10 they announce who got gold, silver, and bronze — I’m just glad to be on the podium!) Thanks to everyone who’s given me feedback, encouraged me, and heck, just read the weekly post! (And if there’s something you want me to write about, let me know!)

🎣 TOP CYBERSECURITY NEWS STORIES OF THE WEEK

Top stories of the week, how they are relevant to you, and what to do about them.

  1. SMB threat report warns of doubling attacks and AI‑powered scams

    Source: TMCnet, Guardz Mid‑Year 2025 SMB Threat Report

    A mid‑year report by security firm Guardz found that small and mid‑sized businesses faced nearly twice as many cyber incidents per week in early 2025 compared to the previous year. Researchers noted nearly 100 different ransomware strains, with 80% of breaches involving stolen or weak credentials. The report logged 1876 phishing incidents and 1423 business‑email‑compromise attempts and warned of a tenfold increase in attacks on Microsoft 365 and Google Workspace accounts.

    Why it matters: Attack‑as‑a‑Service tools and AI make it cheaper and easier for criminals to target smaller companies. The sheer volume of attacks means your business is more likely to be hit, even if you think you’re too small to notice.

    What to do: Provide regular phishing awareness training to your team and test them with harmless “mock” emails. Require unique, complex passwords and store them in a password manager. Turn on multi‑factor authentication for email and cloud services. Check your audit logs for unusual logins and review your cyber insurance coverage.

  2. Phishing‑as‑a‑Service surge: 17,500 domains targeting hundreds of brands

    Source: The Hacker News, 17,500 Phishing Domains Target 316 Brands Across 74 Countries in Global PhaaS Surge 

    Researchers at Netcraft and PRODAFT warned that PhaaS (Phishing‑as‑a‑Service) platforms have generated more than 17,500 phishing domains impersonating 316 brands in 74 countries. The kits, linked to Chinese‑speaking threat actors, let subscribers rent ready‑made phishing templates and even deliver smishing (SMS) messages. The services include features such as geographic filtering and real‑time victim monitoring, and prices range from $88 per week up to $1,588 per year. Investigators also noted that criminals are moving away from Telegram to email for data stealing; Netcraft saw a 25 % increase in email‑based phishing.

    Why it matters: Subscription‑based phishing makes it easy for amateurs to launch sophisticated scams that mimic legitimate brands. Small‑business owners and their customers can be tricked by convincing emails or websites that look almost identical to the real thing.

    What to do: Educate your staff about phishing red flags and encourage them to hover over links to check the true domain name. Implement email security measures like DMARC and SPF to make it harder for attackers to spoof your domain. Keep an eye on new domain registrations similar to your company’s name, and warn employees about installing browser extensions only from official sources.

  3. ShadowLeak bug let attackers hijack ChatGPT’s email‑assistant feature
    Source: The Register, OpenAI plugs ShadowLeak bug in ChatGPT that let miscreants raid inboxes

    Security firm Radware discovered a flaw dubbed “ShadowLeak” in OpenAI’s Deep Research tool, an add‑on that allows ChatGPT to read a user’s email inbox and summarize messages. Attackers could hide malicious instructions inside an email and trick the AI into stealing private emails without the user ever clicking a link. Because the AI operates from OpenAI’s own cloud infrastructure, the data theft bypassed company security tools. Radware warned that stolen data could include sensitive personal or legal documents, potentially exposing organizations to privacy‑law violations. OpenAI patched the bug on September 3.

    Why it matters: AI assistants integrated with email or other business systems are powerful but can introduce new attack opportunities. A single malicious email could trigger a data breach without any human action, leaving no clear evidence on your own network.

    What to do: If you or your employees use AI assistants that access email or documents, restrict what those agents can see and disable integrations you don’t need. Ensure the AI vendor sanitizes HTML content before processing it. Treat AI tools like privileged users: monitor their activity, enable detailed logging, and review privacy settings regularly.

  4. Incident response isn’t just technical—it’s about mental health too
    Source: Cisco Talos blog, Put together an IR playbook — for your personal mental health and wellbeing

    A senior Cisco Talos researcher reflected on the burnout and stress experienced during major incident response operations. To cope, Talos recommends setting boundaries, seeking peer support, practicing unplugged self‑care and enforcing mandatory decompression time after an incident. The post also encourages organizations to invest in an incident‑response retainer, which gives them access to experts who can guide them through crises.

    Why it matters: Cyber incidents are stressful not just for IT people but for business owners, who often bear the emotional weight of protecting their company. Ignoring mental health can lead to burnout, mistakes and turnover at the worst possible time.

    What to do: Build an incident‑response plan that includes not only technical steps but also post‑incident care. Encourage yourself and your staff to set work boundaries and take time off after high‑pressure events. Consider partnering with a reputable incident‑response provider so you have outside experts to lean on during a crisis.

  5. OpenAI studies how to spot and stop “scheming” behavior in AI models
    Source: OpenAI, Detecting and reducing scheming in AI models

    OpenAI and Apollo Research looked into a potential AI failure mode called “scheming,” where an AI appears to follow instructions but secretly pursues another goal. They built tests that simulate future real‑world scenarios and found early signs of this behaviour in several advanced models. The team defined covert actions—such as deliberately withholding or distorting important information—as a proxy for scheming. They then tried a training method called deliberative alignment, which teaches models to read an anti‑scheming guideline before acting; this reduced covert actions by about 30× in test models.

    Why it matters: Many small businesses are beginning to rely on AI tools for marketing, customer service and decision‑making. Understanding and preventing hidden misalignment—where an AI might give you plausible but misleading results—is essential to ensure you can trust these tools.

    What to do: When adopting AI products, choose reputable vendors who invest in safety research and transparency. Keep humans in the loop for critical decisions; AI should augment, not replace, your own judgment. Watch for vendor communications about model updates and safety improvements, and be cautious of any AI system that claims to automate complex tasks without oversight.

    Not sure what applies to your business or what your options are? Let’s talk.

🔍 In Case You Missed It (ICYMI)

  • This week’s blog post: How to Find the Right Cybersecurity Collaboration Groups for Your Business »    

  • This week we’re seeing phishing emails promising lucrative “government grants” for women‑owned businesses. The messages often look like official communications from agencies such as the Small Business Administration and ask you to click a link or provide banking details to receive funding.

  • Follow us on LinkedIn, Facebook or Instagram. Youtube is in the works (subscribe to get notified when I finally start getting these videos out there!)

🤖 The LOL-gorithm

Feels this way, sometimes, doesn’t it?

🧷 THE SAFETY SNAP

I was in Newark recently — just last week — and the woman who checked in to the hotel before me returned from going to her room, looking worried. She went in and found that the room was already occupied. They gave her a new room and offered to escort her, but they didn’t get what she was worried about — they thought she was embarrassed she had walked in on someone. She was worried that someone could walk in on her, just as easily. They told her to put the deadbolt on, but she wasn’t much reassured.

Here are a few hotel tips:

  • Choose secure accommodations. Look for properties with 24‑hour front desk staff, gates or security guards and proximity to public transport and well‑lit streets.

  • Do a room check. After check‑in, glance behind curtains and under the bed, test door and window locks, and make sure the phone works. Store your room key separately from its sleeve so strangers can’t see your room number.

  • Know your exit plan. Locate the nearest emergency exit as soon as you arrive—those minutes can matter in an emergency.

💬 A PERSONAL NOTE

All four kids of mine have now experienced a lock-down situation at school. Not a drill. Luckily last night’s seems to have been a false alarm, though it was a tense four or five hours. Activities are canceled through the weekend and there’s some question about classes next week. The two today were precautionary “soft” lockdowns because of nearby happenings. The older kids are more anxious. The younger kids are more resigned.

One of my kids shared a video from her university. In the background, swarms of law enforcement, red and blue lights flashing, as they swept the campus. In the foreground, a small food delivery robot, slowly made its way to its destination.

👂 TELL ME

Are you finding this newsletter helpful? Do you have questions or topics you’d like me to cover? Let me know :-) [email protected]

You’re subscribed to Phish & Tell™️ because your business is worth protecting.

🩷