Cybersecurity, Privacy & AI Threats
As organizations adopt AI-driven tools, the landscape of cybersecurity and privacy risks continues to expand. Threat actors now exploit AI for more convincing phishing campaigns, while businesses face new challenges in protecting sensitive data across machine learning pipelines and automated workflows. A strong foundation in cybersecurity principles, paired with hands-on awareness of evolving AI threats, is critical for every modern analyst and manager.
Objectives
By the end of this module, students will be able to:
- Recognize social-engineering patterns and identify AI-enhanced phishing attempts.
- Analyze privacy risks in AI workflows and draft a data minimization plan.
- Compare and evaluate basic security controls including multi-factor authentication (MFA), data loss prevention (DLP), encryption, and logging.
Lecture & Discussion
The lecture introduces the CIA triad (confidentiality, integrity, availability) as the core of information security, then explores zero trust architecture, where verification is required at every access point. Students will examine issues of data retention and why keeping sensitive records longer than necessary increases exposure. A forward-looking segment addresses synthetic media and deepfakes, highlighting how generative AI blurs the line between authentic and fabricated content. Ethical implications and organizational responsibilities are emphasized.
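The data-retention point above can be made concrete with a short sketch. This is a minimal, illustrative example (the record format, three-year window, and function names are assumptions, not part of any specific tool) showing how an analyst might flag records held longer than policy allows:

```python
# Sketch: flagging records that exceed a retention window.
# The (record_id, created) tuple format and the 3-year window are
# illustrative assumptions, not a specific organization's policy.
from datetime import datetime, timedelta

RETENTION = timedelta(days=3 * 365)  # hypothetical 3-year retention limit

def records_past_retention(records, now=None):
    """Return IDs of records older than the retention window."""
    now = now or datetime.utcnow()
    return [rid for rid, created in records if now - created > RETENTION]

records = [
    ("student-001", datetime(2020, 1, 15)),  # well past 3 years old
    ("student-002", datetime(2024, 6, 1)),   # recent record
]
print(records_past_retention(records, now=datetime(2025, 1, 1)))
# → ['student-001']
```

Automating this kind of check is one way an organization reduces the exposure that long retention creates.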
Hands-On Exercise
Students will engage in a red-team/blue-team exercise: the red team uses AI tools to craft phishing messages with high believability (e.g., urgent emails mimicking IT or HR), while the blue team works to detect and mitigate these threats. Both sides will then collaborate to draft mitigation strategies, such as user awareness training, MFA enforcement, and anomaly detection. This exercise fosters an understanding of the adversarial dynamics that shape modern security practices.
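As a starting point for the blue team's detection work, students might prototype a simple rule-based scorer. The keyword patterns and weights below are illustrative assumptions only (real detection relies on far richer signals than keywords), but they show the basic idea of scoring phishing indicators:

```python
# Sketch: a rule-based phishing-indicator scorer a blue team might prototype.
# Patterns and weights are illustrative assumptions, not a production detector.
import re

INDICATORS = {
    r"\burgent\b": 2,
    r"\bverify your (account|password)\b": 3,
    r"\bclick (here|the link)\b": 2,
    r"\bgift card\b": 3,
}

def phishing_score(message: str) -> int:
    """Sum the weights of all indicator patterns found in the message."""
    text = message.lower()
    return sum(w for pat, w in INDICATORS.items() if re.search(pat, text))

msg = "URGENT: click here to verify your password before 5 PM."
print(phishing_score(msg))  # → 7 (urgent + verify your password + click here)
```

A scorer like this also illustrates the adversarial dynamic: once the red team learns the rules, it can craft messages that evade them, motivating layered defenses such as MFA and anomaly detection.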
Lab: Security Posture Review
The lab deliverable is a “Security Posture Review”, consisting of:
- A threat table mapping potential attack vectors (e.g., phishing, data exfiltration, insider misuse).
- A control map showing where defenses like encryption, DLP, and logging apply.
- A short policy snippet on data minimization, written in plain English for inclusion in an organizational handbook.
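The threat table and control map deliverables can be drafted as simple data structures before being formatted for the report. The vector-to-control mappings below follow the examples named above but are illustrative, not exhaustive:

```python
# Sketch: the lab's threat table and control map as a simple dictionary.
# Vectors and controls mirror the examples in this module; the specific
# mappings are illustrative assumptions, not a complete control catalog.
THREATS = {
    "phishing": ["user awareness training", "MFA", "email filtering"],
    "data exfiltration": ["DLP", "encryption", "logging"],
    "insider misuse": ["least-privilege access", "logging", "anomaly detection"],
}

def controls_for(vector: str) -> list[str]:
    """Look up the defenses mapped to a given attack vector."""
    return THREATS.get(vector, [])

# Print the control map in a handbook-friendly layout.
for vector, controls in THREATS.items():
    print(f"{vector}: {', '.join(controls)}")
```

Keeping the mapping in one structure makes it easy to spot attack vectors with no controls assigned, which is exactly the gap a posture review should surface.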
Lesson Summary
Organizations are facing an expanding landscape of cybersecurity and privacy risks as they incorporate AI-driven tools. Threat actors are leveraging AI for more convincing phishing campaigns, while businesses encounter challenges in safeguarding sensitive data through machine learning pipelines and automated workflows.
Key points covered in the module include:
- Recognizing social-engineering patterns and detecting AI-enhanced phishing attempts.
- Analyzing privacy risks in AI workflows and creating a data minimization plan.
- Comparing and assessing basic security controls such as multi-factor authentication (MFA), data loss prevention (DLP), encryption, and logging.
The lecture emphasizes the importance of the CIA triad (confidentiality, integrity, availability) in information security, introduces zero trust architecture, and discusses data retention issues. It also explores synthetic media and deepfakes, outlining the ethical implications and organizational responsibilities related to generative AI.
Students will participate in a red-team/blue-team exercise where the red team crafts believable phishing messages using AI tools, and the blue team detects and counters these threats. Subsequently, both teams collaborate on mitigation strategies like user awareness training, MFA enforcement, and anomaly detection.
The lab assignment involves a "Security Posture Review" with deliverables including a threat table mapping potential attack vectors, a control map illustrating defense mechanisms like encryption and logging, and a short policy snippet on data minimization written in accessible language for inclusion in an organizational handbook.
Policy Snippet Example (Data Minimization)
“Our institution collects only the data necessary to deliver academic and administrative services. Sensitive information (such as SSNs, payment card details, or health records) must not be stored in AI systems or third-party apps unless explicitly authorized and encrypted. Data older than 3 years must be securely archived or deleted.”
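The policy's "must not be stored unless explicitly authorized" clause can be partially enforced in code. As a minimal sketch (the SSN regex is a US-format illustration only; real DLP tooling covers many more patterns and formats), text could be scanned and redacted before it reaches an AI system or third-party app:

```python
# Sketch: redacting SSN-like patterns before text enters an AI system or
# third-party app, in the spirit of the policy snippet above. The regex is
# an illustrative US-format assumption; production DLP covers far more.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssns(text: str) -> str:
    """Replace SSN-formatted substrings with a redaction marker."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

print(redact_ssns("Student SSN 123-45-6789 on file."))
# → Student SSN [REDACTED-SSN] on file.
```

A filter like this is a last line of defense; the policy's first line is not collecting the data at all.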