InfosecTrain Hosts 2-Day LLM Security & Red Teaming Masterclass

InfosecTrain Hosts 2-Day LLM Security & Red Teaming Masterclass
 
BANGALORE, India - Oct. 6, 2025 - PRLog -- What
InfosecTrain, a leading cybersecurity training provider, is hosting a 2-Day  LLM Security & Red Teaming Masterclass. This masterclass will be a two-day intensive program focused on the security and adversarial evaluation of modern AI systems, particularly large language models (LLMs).

When
01-02 November 2025 (Sat-Sun)
7:00 PM - 11:00 PM (IST)

Why Attend
Attending the  LLM Security & Red Teaming Masterclass will provide a unique opportunity to bridge the gap between AI development and cybersecurity expertise. Participants will gain a deep understanding of the practical challenges and vulnerabilities associated with large language models, including prompt injection, data exfiltration, and model poisoning. The program will also introduce advanced frameworks and tools for evaluating AI security, allowing attendees to simulate real-world attacks and strengthen defenses. By exploring the full AI lifecycle, from model development to deployment, participants will learn how to identify weak points, implement safeguards, and ensure responsible use of AI technologies.

Agenda (Two Days of Transformative Learning)

DAY 1: Introduction to AI and LLM Security by Avnish (7 PM - 11 PM)
  • Demystifying the core concepts and components of an AI system
  • Types of AI Systems: Machine Learning, Deep Learning, Generative AI, Agentic AI
  • Building and deploying AI - Model Development Lifecycle
  • Understanding LLMs: Transformer Architecture, Pre-training and Fine Tuning
  • LLM Applications: Chatbots, Code Generation, Cybersecurity Use Cases
  • AI and GenAI Frameworks: Scikit-learn, Tensorflow, AutoML, Hugging Face, LangChain, Llamaindex, OpenAI • • API, Ollama, LMStudio
  • Security Considerations while Developing and Deploying AI Systems

DAY 2: AI and LLM Red Teaming by Ashish (7 PM - 11 PM)
  • Introduction to AI Red Teaming – What is it and why it is needed?
  • Attack Families for AI Red Teaming: Poisoning, Injection, Evasion, Extraction, Availability, Supply Chain
  • LLM01: Prompt Injection – Direct and Indirect
  • LLM02: Sensitive Information Disclosure – Data exfiltration
  • LLM03: Supply Chain – Malicious Packages and Models
  • LLM04: Data and Model Poisoning – Poisoning datasets and models during training and fine-tuning
  • LLM05: Improper Output Handling – Injection via model outputs
  • LLM06: Excessive Agency – Agents with dangerous privileges
  • LLM07: System Prompt Leakage – Exposing hidden system instructions through crafted queries
  • LLM08: Vector and Embedding Weaknesses
  • LLM09: Misinformation – Detecting Hallucinations
  • LLM10: Unbounded Consumption – Resource abuse and DOS Attacks
  • Tools and Frameworks for LLM Red Teaming: Cleverhans, Foolbox, Adversarial Robustness Toolbox
Registration Link: https://www.infosectrain.com/pages/lp/llm-masterclass/

About InfosecTrain
To know more about training programs offered by InfosecTrain:
Please write back to sales@infosectrain.com or call at IND: 1800-843-7890 (Toll-Free) / US: +1 657-221-1127 / UAE: +971 569-908-131

Contact
InfosecTrain
social@infosectrain.com
18008437890
End
Source: » Follow
Email:***@infosectrain.com Email Verified
Tags:Llm
Industry:Education
Location:Bangalore - Karnataka - India
Subject:Events
Account Email Address Verified     Account Phone Number Verified     Disclaimer     Report Abuse
Infosec Train PRs
Trending News
Most Viewed
Top Daily News



Like PRLog?
9K2K1K
Click to Share