#4 Symbolic AI
MYCIN (1972–1976)
Developed at Stanford for diagnosing bacterial infections and recommending antibiotics.
DENDRAL (1965–1970s)
One of the first expert systems, used for chemical structure analysis.
XCON (1980s, DEC)
Used by Digital Equipment Corporation to configure computer systems.
CLIPS (1980s–1990s)
NASA’s rule-based expert system shell.
Distributed Artificial Intelligence: Theory and Praxis (Eurocourses: Computer and Information Science, 5), 1993 edition
edited by Nicholas M. Avouris and Les Gasser
- Carl Hewitt introduced the Actor Model (1973)
[2018–2020]
Foundation Models (BERT, GPT-2)
- Pretrained transformers
- Context-aware, but prompt-reliant
- No memory, no decision-making
#7 Agents
Autonomy – Operates without human intervention.
Goal-oriented – Pursues defined objectives.
Interactive – Takes input from the environment (e.g., sensors, data streams).
Adaptable – Some agents adapt over time via machine learning.
Learning – Can improve behavior from accumulated experience.
Modular – Built from composable, replaceable components.
Expert Systems (ES)
Rule-based or logic-driven.
Non-learning by default (static knowledge base).
Transparent decision-making (traceable steps).
Suitable for domains with well-established rules.
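The expert-system pattern above can be sketched as a tiny forward-chaining rule engine. The rules and facts below are invented for illustration (loosely MYCIN-flavored), not taken from any real system; note how the trace makes each decision step traceable:

```python
# A minimal forward-chaining rule engine in the expert-system style.
# Rule names, conditions, and conclusions here are illustrative only.

def forward_chain(facts, rules):
    """Apply rules until no new facts are derived; keep a trace for transparency."""
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {conditions} => {conclusion}")
                changed = True
    return facts, trace

RULES = [
    ("R1", ["gram_negative", "rod_shaped"], "likely_enterobacteriaceae"),
    ("R2", ["likely_enterobacteriaceae", "urinary_site"], "suggest_e_coli"),
]

facts, trace = forward_chain(["gram_negative", "rod_shaped", "urinary_site"], RULES)
```

Because the knowledge base is static, the system never learns; but every conclusion can be replayed from the trace, which is exactly the transparency property noted above.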
#8 🧩 1. Perception (Sensing Layer)
Purpose: Understand the environment.
Inputs: Text, speech, images, structured data, API responses.
Tools: NLP engines, OCR, speech-to-text, web scrapers, database connectors.
Role: Converts raw data into internal representations.
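A minimal sketch of this sensing layer, assuming a hypothetical `Percept` record and two input kinds (free text and a dict-shaped API response):

```python
# Hypothetical perception step: normalize heterogeneous raw inputs into one
# internal "Percept" record the rest of the agent can consume.
from dataclasses import dataclass

@dataclass
class Percept:
    modality: str   # "text", "api", ...
    content: str
    metadata: dict

def perceive(raw, source):
    if source == "api":
        # flatten a JSON-like API response into a single text line
        content = "; ".join(f"{k}={v}" for k, v in raw.items())
        return Percept("api", content, {"keys": sorted(raw)})
    text = str(raw).strip()
    return Percept("text", text, {"length": len(text)})

p = perceive({"status": "ok", "temp": 21}, "api")
```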
🧠 2. Reasoning and Decision Making
Purpose: Derive meaning and choose actions.
Components:
Logic engine: Applies rules, constraints, and policies.
Knowledge base: Stores domain knowledge and learned patterns.
Memory modules: Short-term (session) and long-term (contextual history).
Tools: LLMs, symbolic reasoning engines, vector databases (e.g., Pinecone, FAISS).
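The short-term/long-term split can be sketched as follows; a bag-of-words cosine similarity stands in for a real embedding model and vector database such as Pinecone or FAISS:

```python
# Toy memory module: short-term session buffer plus a long-term store,
# with naive word-overlap similarity in place of real vector search.
from collections import Counter
import math

class Memory:
    def __init__(self):
        self.short_term = []   # current session turns
        self.long_term = []    # persisted facts

    def remember(self, text, long_term=False):
        (self.long_term if long_term else self.short_term).append(text)

    def recall(self, query, k=1):
        def sim(a, b):
            va, vb = Counter(a.lower().split()), Counter(b.lower().split())
            dot = sum(va[w] * vb[w] for w in va)
            na = math.sqrt(sum(v * v for v in va.values()))
            nb = math.sqrt(sum(v * v for v in vb.values()))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(self.long_term, key=lambda t: sim(query, t), reverse=True)
        return ranked[:k]

m = Memory()
m.remember("user prefers metric units", long_term=True)
m.remember("meeting is at 3pm", long_term=True)
best = m.recall("which units does the user prefer?")[0]
```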
🧭 3. Planning and Goal Management
Purpose: Break down tasks and formulate strategies.
Planner module: Creates workflows from goals.
Task manager: Assigns sub-tasks to appropriate agents or tools.
Often uses frameworks like LangChain, AutoGen, or CrewAI.
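A toy planner illustrating the goal-decomposition idea; the template table below is a hand-written stand-in for what LangChain-style frameworks typically delegate to an LLM or a solver:

```python
# Hypothetical planner: decompose a goal into ordered sub-tasks
# using a fixed template table (illustrative goals and steps).
PLAN_TEMPLATES = {
    "write_report": ["gather_sources", "draft_outline", "write_sections", "review"],
    "book_travel": ["find_flights", "reserve_hotel", "send_itinerary"],
}

def plan(goal):
    steps = PLAN_TEMPLATES.get(goal)
    if steps is None:
        # unknown goal: treat it as a single atomic task
        return [{"task": goal, "status": "pending"}]
    return [{"task": s, "status": "pending"} for s in steps]

workflow = plan("write_report")
```

A task manager would then hand each pending entry to the agent or tool best suited to it.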
🧑‍💻 4. Action Execution (Actuation Layer)
Purpose: Perform tasks in digital or physical environments.
Includes:
API wrappers (e.g., call calendar, payment gateway).
UI automation (e.g., RPA tools).
Tool interfaces (e.g., sending emails, editing documents).
May include tool-use modules that interpret LLM output into executable commands.
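One common shape for such a tool-use module: the LLM emits a JSON action, which the executor validates and dispatches. The tool names and the JSON schema here are assumptions for illustration:

```python
# Sketch of a tool-use module: parse a structured "action" emitted by an LLM
# (here a hard-coded JSON string) and dispatch it to a registered tool.
import json

TOOLS = {
    "send_email": lambda to, subject: f"email to {to}: {subject}",
    "get_time":   lambda: "12:00",
}

def execute(llm_output):
    action = json.loads(llm_output)                 # validate structure first
    tool = TOOLS.get(action.get("tool"))
    if tool is None:
        return {"ok": False, "error": f"unknown tool {action.get('tool')!r}"}
    try:
        return {"ok": True, "result": tool(**action.get("args", {}))}
    except TypeError as exc:                        # bad or missing arguments
        return {"ok": False, "error": str(exc)}

out = execute('{"tool": "send_email", "args": {"to": "[email protected]", "subject": "hi"}}')
```

Failing closed on unknown tools or bad arguments, rather than raising, keeps a malformed LLM output from crashing the agent loop.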
🔄 5. Learning and Adaptation
Purpose: Improve performance and decision-making.
Mechanisms:
Reinforcement learning
Feedback integration
Performance analytics
Example: Adjusting how it responds to a user based on past preferences.
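A deliberately simple sketch of that preference adaptation: an exponential moving average of feedback per response style (a real system would use reinforcement learning over far richer state):

```python
# Toy adaptation loop: keep a running score per response style based on
# user feedback, and pick the best-scoring style next time.
from collections import defaultdict

class StylePicker:
    def __init__(self, styles):
        self.scores = defaultdict(float)
        self.styles = list(styles)

    def pick(self):
        return max(self.styles, key=lambda s: self.scores[s])

    def feedback(self, style, reward):
        # exponential moving average: recent feedback weighs 20%
        self.scores[style] = 0.8 * self.scores[style] + 0.2 * reward

picker = StylePicker(["terse", "detailed"])
picker.feedback("detailed", +1.0)   # user liked a detailed answer
picker.feedback("terse", -1.0)      # user disliked a terse one
choice = picker.pick()
```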
🕸️ 6. Coordination and Multi-Agent Orchestration
Purpose: Manage collaboration between agents.
Manager agent: Delegates roles to specialized agents.
Communication layer: Handles inter-agent messaging and status updates.
Ensures synchronization, avoids redundancy, and resolves conflicts.
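The delegation-plus-deduplication idea can be sketched like this; the agent registry and its capabilities are hypothetical:

```python
# Minimal manager-agent sketch: route sub-tasks to specialist agents by
# capability, and skip duplicates so no agent repeats work already assigned.
AGENTS = {
    "research": lambda t: f"research notes for {t}",
    "write":    lambda t: f"draft of {t}",
}

def orchestrate(tasks):
    done, results = set(), []
    for capability, task in tasks:
        if (capability, task) in done:
            continue                      # avoid redundant work
        done.add((capability, task))
        agent = AGENTS[capability]
        results.append(agent(task))
    return results

results = orchestrate([("research", "topic A"), ("write", "topic A"),
                       ("research", "topic A")])   # third entry is a duplicate
```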
🛡️ 7. Governance, Safety, and Human Oversight
Purpose: Ensure control, trust, and alignment.
Includes:
Output validation
Hallucination filters
Role-based access controls
Audit logs
Enables human-in-the-loop systems for final validation or escalation.
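A minimal governance gate combining output validation, role-based access, an audit log, and human escalation; the SSN regex and role names are illustrative only:

```python
# Governance sketch: validate agent output against simple policies, log every
# decision, and escalate to a human reviewer when validation fails.
import re
import datetime

AUDIT_LOG = []

def validate_and_release(output, user_role):
    reasons = []
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", output):   # crude PII (SSN) check
        reasons.append("possible SSN in output")
    if user_role not in {"analyst", "admin"}:          # role-based access control
        reasons.append(f"role {user_role!r} not permitted")
    decision = "released" if not reasons else "escalated_to_human"
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "reasons": reasons,
    })
    return decision

d1 = validate_and_release("Quarterly numbers look fine.", "analyst")
d2 = validate_and_release("Customer SSN is 123-45-6789.", "analyst")
```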
📦 8. Infrastructure and Integration Layer
Purpose: Connect with external systems and scale reliably.
Cloud services (e.g., AWS, Azure)
Data pipelines and APIs
Monitoring dashboards and deployment tools
Model lifecycle management (MLOps / GenAIOps)
#9 Infinite feedback loops
The convenience of hands-off reasoning for human users of AI agents also comes with risks. Agents that cannot form a comprehensive plan or reflect on their findings may repeatedly call the same tools, falling into infinite feedback loops. To avoid this redundancy, some level of real-time human monitoring may be used.
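One lightweight mitigation, short of full human monitoring, is a loop guard that caps identical tool calls; the threshold of three repeats below is an arbitrary choice:

```python
# Break infinite tool-call loops by capping how many times the agent may
# repeat the identical call before handing control back to a human.
from collections import Counter

class LoopGuard:
    def __init__(self, max_repeats=3):
        self.calls = Counter()
        self.max_repeats = max_repeats

    def allow(self, tool, args):
        # identical tool + identical arguments counts as a repeat
        key = (tool, tuple(sorted(args.items())))
        self.calls[key] += 1
        return self.calls[key] <= self.max_repeats

guard = LoopGuard(max_repeats=3)
verdicts = [guard.allow("search", {"q": "same query"}) for _ in range(5)]
# the first three calls pass; the fourth and fifth are blocked for review
```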
#10 Transparency and Explainability: AI agents should operate in a way that users can understand.
Clear decision-making logic builds trust and allows oversight.
Bias and Fairness: Training data must be diverse and inclusive to avoid discrimination. Biased AI agents can lead to unfair outcomes.
Accountability and Responsibility: Humans must remain accountable for AI decisions. There should be clear ownership of the agent’s actions and consequences.
Privacy and Data Security: Sensitive data used by AI agents must be protected. Agents must comply with data protection laws and ethical data use practices.
#13 Key Regulations and Standards in Compliance
General Data Protection Regulation (GDPR):
This is a big one when it comes to compliance for any system that handles personal data. While not specific to agentic systems, it’s a cornerstone for ensuring that any AI systems dealing with EU citizens' data are compliant.
California Consumer Privacy Act (CCPA):
Similar to GDPR, but focused on California residents. It's another great example of a regulation that you can mention as part of the compliance landscape.
ISO/IEC 38507 on AI Governance:
An international standard (published 2022) that provides guidance on the governance implications of using AI in organizations, including compliance aspects. It is not specific to agentic systems, but it squarely covers AI governance.
NIST AI Risk Management Framework:
The National Institute of Standards and Technology has been developing guidelines around trustworthy AI. This includes managing bias, ensuring transparency, and maintaining compliance, which you can tie into your discussion.
#14 General Data Protection Regulation (GDPR):
This is a big one when it comes to compliance for any system that handles personal data. While not specific to agentic systems, it’s a cornerstone for ensuring that any AI systems dealing with EU citizens' data are compliant.
Lawfulness, fairness & transparency — GDPR Art. 5(1)(a)
Purpose limitation — GDPR Art. 5(1)(b)
Data minimisation — GDPR Art. 5(1)(c)
Accuracy — GDPR Art. 5(1)(d)
Storage limitation — GDPR Art. 5(1)(e)
Integrity & confidentiality — GDPR Art. 5(1)(f)
Accountability — GDPR Art. 5(2)
#16 The CCPA, as amended by the CPRA, expressed as principles:
Transparency & notice
Businesses must tell consumers—at or before collection—what categories of personal information (PI) they collect, for what purposes, and whether the PI will be “sold” or “shared.” Privacy policies must reflect this.
Strong consumer rights
Californians can:
• Know/access the PI a business collected about them
• Delete PI (with certain exceptions)
• Opt out of the sale or sharing of PI (including cross-context behavioral advertising)
• Correct inaccurate PI
• Limit the use/disclosure of “sensitive PI” (e.g., SSN, precise geolocation)
• Be free from discrimination for exercising these rights
These rights are explicit in AG/CPPA guidance.
Easy opt-out, including Global Privacy Control (GPC)
Covered businesses must provide simple opt-out methods and honor user-enabled GPC signals as a valid “do not sell or share” request.
Data minimization, purpose & storage limits
Collection, use, retention, and sharing must be “reasonably necessary and proportionate” to the disclosed purposes (and not incompatible with them). Regulations reinforce this purpose-limitation/storage-limitation standard.
Contracts with recipients of PI
Transfers to service providers/contractors/third parties require specific contracts that bind them to CCPA standards and restrict further use.
Special treatment for children’s data
Sale or sharing of PI from consumers under 16 requires opt-in (parental consent under 13); penalties are higher for violations involving minors.
Security & enforcement
The law is enforced by the California Privacy Protection Agency and the Attorney General. Consumers have a limited private right of action for certain data breaches tied to inadequate security. Penalties can be significant per violation.
Who must comply (scope)
For-profit entities doing business in CA that meet at least one threshold—e.g., >$25M revenue (indexed; $26.625M for 2025), buy/sell/share PI of ≥100,000 consumers/households, or derive ≥50% of revenue from selling/sharing PI—are covered. Location doesn’t matter if they meet the thresholds.
#17 https://www.joneswalker.com/en/insights/blogs/ai-law-blog/ai-regulatory-update-californias-sb-243-mandates-companion-ai-safety-and-accoun.html?id=102lq7c
1. Clear disclosure when users could think they’re chatting with a human.
2. Self-harm safeguards: maintain protocols to prevent harmful outputs and surface crisis resources when risk signals appear.
3. Extra protections for known minors: default reminders every ~3 hours that the bot isn’t human; measures to avoid sexual content or prompts.
4. Annual reporting (from July 1, 2027): metrics + methods to the Office of Suicide Prevention; state posts aggregate data.
5. Enforcement: private right of action; damages are the greater of actuals or $1,000 per violation, plus fees.
6. Not covered: traditional customer-service/ops bots, analytics tools, narrow game NPCs, or voice assistants that don’t sustain relationships.
#18 LLM (Reasoner) — the core policy that plans, decides, and generates actions.
Tools / Skills — external functions/APIs the agent can call (code exec, web, DB, Slack, etc.).
Tool Router / Action Executor — safely validates arguments, runs tools, handles failures/timeouts.
Planner / Decomposer — turns goals into subgoals, steps, or workflows.
Memory
• Short-term (scratchpad/state for current task)
• Long-term (vector DB / key-value facts, episodic logs)
• Procedural (what worked before; skills/results)
Retrieval / Knowledge Layer — RAG over docs, code, KBs, internet.
Perception & I/O Adapters — interfaces to users and environments (CLI, UI, sensors, files).
State Manager — keeps the loop context: goals, constraints, step history, tool results.
Critic / Reflection — self-evaluation, hallucination checks, error analysis, result grading.
Guardrails & Policy — safety filters, authZ/authN, rate limits, PII handling, compliance.
Orchestrator / Controller — runs the agent loop (perceive → plan → act → observe → revise), scheduling and retries.
Learning / Improvement — updates prompts, tool selection policies, or fine-tunes models from feedback.
Monitoring & Logs — traces, metrics, cost/latency, observability (for debugging & eval).
Multi-Agent Coordination (optional) — roles, messaging, blackboards, or marketplaces between agents.
Sandbox / Execution Environment (optional) — isolated code runner, notebooks, simulators.
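The components above can be wired into the basic orchestrator loop (perceive → plan → act → observe → revise); every part below is a stub for illustration:

```python
# Skeleton of the agent loop run by the Orchestrator/Controller.
# The planner and the single "search" tool are stubs, not real components.
def run_agent(goal, tools, max_steps=5):
    state = {"goal": goal, "history": [], "done": False}
    for _ in range(max_steps):
        percept = state["history"][-1] if state["history"] else None   # perceive
        step = plan_next(state, percept)                               # plan
        if step is None:
            state["done"] = True
            break
        observation = tools[step["tool"]](step["args"])                # act
        state["history"].append(observation)                          # observe/revise
    return state

def plan_next(state, percept):
    # stub planner: stop once any observation mentions the goal
    if percept and state["goal"] in percept:
        return None
    return {"tool": "search", "args": state["goal"]}

state = run_agent("answer", {"search": lambda q: f"found {q}"})
```

The `max_steps` cap plays the same role as the loop-guard idea earlier: even a planner that never decides it is done cannot run forever.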