Build AI Agents: Homework

This use case explores the foundational concepts behind designing, evaluating, and deploying AI agents in business workflows. The assignment required analyzing how agents process tasks, make decisions, follow constraints, and integrate into real organizational settings. I focused on understanding the differences among simple prompting, system-role design, tool-enhanced agents, and multi-step reasoning frameworks.

This homework directly supports the CIDM 6096 objectives related to strategic AI adoption, workflow clarity, and building organizational AI readiness.


Overview of the Assignment

In this homework, I examined:

  • What makes an AI agent “agentic”

  • How tasks are decomposed into steps

  • Where AI should act autonomously vs. be constrained

  • How guardrails prevent errors

  • How agents integrate with human workflows

  • Example tasks that benefit from agent design

  • Risks, failure modes, and ethical considerations

The focus was on practical business applications, including:

  • Customer service triage

  • Document drafting

  • Research automation

  • Structured data extraction

  • Compliance-aligned workflows

  • Decision support systems


Key Concepts Learned

1. Agents Need Role, Purpose, and Boundaries

An effective AI agent must have:

  • A clearly defined role

  • A scope of work

  • Guardrails and explicit non-permissions (actions the agent must not take)

  • Specific formats and reasoning steps

Without this clarity, agents hallucinate or generalize too broadly.


2. Agentic Behavior Requires Multi-Step Reasoning

A one-off prompt like “Summarize this” does not make an agent.

An agent requires:

  • Goals

  • Sub-tasks

  • Step sequences

  • Validation or checking

  • Completion criteria

This aligns with modern agent frameworks like AgentKit and LangChain, and with the ReAct pattern of interleaving reasoning steps and actions.
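The loop described above (goals, sub-tasks, step sequences, validation, completion criteria) can be sketched in a few lines. This is a minimal illustration, not code from the homework; `AgentTask`, `validate`, and `run_agent` are hypothetical names, and the stand-in `do_step` replaces a real model call.

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    goal: str
    steps: list[str]                     # ordered sub-tasks
    results: dict = field(default_factory=dict)

def validate(step: str, output: str) -> bool:
    """Check a step's output before moving on (here: simply non-empty)."""
    return bool(output.strip())

def run_agent(task: AgentTask, do_step) -> dict:
    """Execute sub-tasks in sequence; stop if any validation fails."""
    for step in task.steps:
        output = do_step(step)
        if not validate(step, output):
            raise RuntimeError(f"Validation failed at step: {step}")
        task.results[step] = output
    return task.results                  # completion criterion: every step validated

# Usage: a stub 'do_step' instead of an actual LLM call.
task = AgentTask(
    goal="Draft a refund-policy summary",
    steps=["gather policy text", "summarize", "check length"],
)
results = run_agent(task, do_step=lambda s: f"done: {s}")
```

The point of the sketch is the structure, not the stub logic: each sub-task is validated before the agent proceeds, which is what separates an agent loop from a single prompt.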


3. Human Oversight Is Non-Negotiable

The homework emphasized that even well-designed agents:

  • Can misinterpret ambiguous inputs

  • Need human review for high-stakes actions

  • Should default to clarifying questions when uncertain
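These oversight rules can be expressed as a simple routing function. A minimal sketch, with assumed intent labels, a hypothetical `HIGH_STAKES` set, and an arbitrary confidence threshold:

```python
# Intents that always require a human, regardless of model confidence (assumed set).
HIGH_STAKES = {"refund", "account_closure"}

def route(intent: str, confidence: float, threshold: float = 0.8) -> str:
    """Decide whether the agent acts, asks for clarification, or escalates."""
    if confidence < threshold:
        return "ask_clarifying_question"   # default to clarifying when uncertain
    if intent in HIGH_STAKES:
        return "escalate_to_human"         # high-stakes actions need human review
    return "act_autonomously"
```

For example, `route("refund", 0.95)` escalates even at high confidence, while any low-confidence input is routed to a clarifying question first.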


4. Tool Use Expands Capability (But Adds Risk)

Tool-enabled agents (e.g., API callers, calculators, search tools):

  • Expand decision-making

  • Allow complex workflows

  • Introduce operational risk

  • Require constraints on when/why tools are used
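One way to enforce those constraints is an allowlist plus a required, logged reason for every tool call. This is an illustrative sketch, not a framework API; the tool names and the audit print are placeholders.

```python
# Hypothetical allowlist of tools this agent may invoke.
ALLOWED_TOOLS = {"calculator", "order_lookup"}

def call_tool(name: str, reason: str, tools: dict):
    """Invoke a tool only if it is allowlisted and a reason is recorded."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool not permitted: {name}")
    if not reason:
        raise ValueError("A stated reason is required for every tool call")
    print(f"AUDIT: {name} called because: {reason}")   # stand-in for real logging
    return tools[name]()

# Usage with a stub tool registry.
tools = {"calculator": lambda: 2 + 2}
result = call_tool("calculator", "user asked for arithmetic", tools)
```

The allowlist constrains *when* a tool can fire, and the mandatory reason creates an audit trail for *why*, which is the operational-risk control the bullet list describes.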


5. Strong Prompts Strengthen Agent Behavior

I experimented with prompts that included:

  • Role definition

  • Step-by-step process instructions

  • Input/Output requirements

  • Safety constraints

  • Examples for pattern learning

These greatly improved stability and reliability.


Example: My Agent Specification Structure

Below is the structure I used to design better agents in this homework:
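The original spec is not reproduced here, but combining the elements discussed above (role, scope, guardrails, process steps, output format), a structure along these lines can be sketched. All field names and values below are illustrative placeholders:

```python
# Illustrative agent specification, expressed as a plain Python dict.
AGENT_SPEC = {
    "role": "Customer Service Agent",
    "scope": ["order status", "refund eligibility questions"],
    "guardrails": [
        "never approve refunds outside policy",
        "escalate legal questions to a human",
    ],
    "process": ["classify request", "check policy", "draft reply", "self-check"],
    "output_format": "short paragraph plus a next-step recommendation",
}
```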

This structure later informed my custom GPT work (e.g., DME Coverage Decoder).


🤖 Agent Behavior in Action (Evidence)

I tested this structure by simulating a Customer Service Agent.

Prompt: Act as the agent defined above. A customer is asking for a refund outside the 30-day window.

Evidence of Work:


Why This Use Case Matters

This homework strengthened my ability to:

  • Break down complex workflows

  • Identify which parts AI can perform reliably

  • Design agent roles that support real organizational tasks

  • Anticipate risks, failures, and operational edge cases

  • Build resources others can use to evaluate agent-based systems

It demonstrates thoughtful, safe, strategic use of GenAI rather than surface-level experimentation.


How This Connects to My Portfolio

The lessons from this assignment show up across multiple areas of my portfolio:

  • DME Coverage Decoder GPT → Uses strong role, constraints, and output formatting

  • Workflow Vendor Comparison → Evaluates tools that power agentic workflows

  • Translation + Patient Education → Demonstrates structured reasoning

  • TeleCare Triage Thinking → Applies agentic logic to patient intake workflows

This homework forms the backbone of how I now think about building reliable, safe, and useful AI systems.