Table of Contents
Introduction to Module 7
7.1 Case Study: E-Commerce Recommendation Engine
7.1.1 Understanding the Problem
7.1.2 Prompt Design
7.1.3 Implementation with Code
7.1.4 Real-Life Example
7.2 Case Study: Legal Document Analysis
7.2.1 Understanding the Problem
7.2.2 Prompt Design
7.2.3 Implementation with Code
7.2.4 Real-Life Example
7.3 Case Study: Creative Writing Assistant
7.3.1 Understanding the Problem
7.3.2 Prompt Design
7.3.3 Implementation with Code
7.3.4 Real-Life Example
7.4 Hands-On Project: Building a Personal AI Assistant
7.4.1 Project Overview
7.4.2 Step-by-Step Implementation
7.4.3 Code Implementation
7.4.4 Testing and Deployment
7.5 Real-Life Integration: Healthcare and Finance Sectors
7.5.1 Healthcare Applications
7.5.2 Finance Applications
7.5.3 Code Examples
7.6 Code Snippet: Full Project Implementation
7.7 Best Practices in Case Studies
7.8 Exception Handling in Projects
7.9 Pros, Cons, and Alternatives
Conclusion
Introduction to Module 7
Welcome to Module 7 of the AI Prompt Engineering: Complete Course Outline. This module dives deep into real-world applications of AI prompt engineering, showcasing practical case studies and hands-on projects. We’ll explore how to design prompts for e-commerce recommendation engines, legal document analysis, and creative writing assistants. You’ll also build a personal AI assistant and learn how AI integrates into healthcare and finance sectors. With detailed code examples, best practices, and exception handling, this module equips you with the skills to apply AI in real-life scenarios.
7.1 Case Study: E-Commerce Recommendation Engine
7.1.1 Understanding the Problem
E-commerce platforms rely on recommendation engines to suggest products, increasing user engagement and sales. The challenge is to generate personalized product suggestions based on user behavior, preferences, and purchase history. AI prompt engineering can enhance these systems by crafting prompts that guide large language models (LLMs) to analyze user data and produce relevant recommendations.
Key Requirements:
Analyze user browsing history, purchase patterns, and preferences.
Generate product suggestions with high relevance.
Handle edge cases like new users or limited data.
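The new-user edge case can be handled before any prompt is sent. The sketch below seeds cold-start users with store-wide bestsellers instead of personal history; the `popular_products` list and the field names are illustrative assumptions, not part of any real API.

```python
def build_recommendation_input(user_data, popular_products):
    """Fall back to store-wide bestsellers when the user has no history (cold start)."""
    has_history = bool(user_data.get("recent_purchases") or user_data.get("browsing_history"))
    if not has_history:
        # New user: seed the prompt with popular items instead of personal data
        return {"mode": "popular", "items": popular_products[:5]}
    return {"mode": "personalized", "items": user_data}

profile = build_recommendation_input({"user_id": "999"}, ["Phone Case", "Charger", "Earbuds"])
print(profile["mode"])  # popular
```

A real system would draw `popular_products` from sales data; the point is that the prompt template switches based on how much user data exists.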
7.1.2 Prompt Design
Effective prompts for an e-commerce recommendation engine must be clear, context-rich, and structured to elicit precise outputs. Here’s an example prompt:
Prompt Example:
You are an AI recommendation engine for an e-commerce platform. Given the following user data:
- User ID: 12345
- Recent Purchases: [Laptop, Wireless Mouse]
- Browsing History: [Gaming Headset, External Hard Drive, USB-C Cable]
- Preferences: [Tech, Gadgets]
Suggest 3 products with their names, categories, and reasons why they suit the user. Format the output as a JSON object.
Expected Output:
{
  "recommendations": [
    {
      "product_name": "Gaming Keyboard",
      "category": "Tech",
      "reason": "Complements the user's interest in gaming peripherals such as the browsed gaming headset."
    },
    {
      "product_name": "SSD Drive",
      "category": "Storage",
      "reason": "Aligns with the user's interest in external hard drives."
    },
    {
      "product_name": "Monitor Stand",
      "category": "Accessories",
      "reason": "Enhances the user's laptop setup for better ergonomics."
    }
  ]
}
7.1.3 Implementation with Code
Let’s implement a Python script that calls an LLM (e.g., Grok 3 via the xAI API) to generate product recommendations. We’ll assume access to a mock user database.
import requests
import json

# Mock user data
user_data = {
    "user_id": "12345",
    "recent_purchases": ["Laptop", "Wireless Mouse"],
    "browsing_history": ["Gaming Headset", "External Hard Drive", "USB-C Cable"],
    "preferences": ["Tech", "Gadgets"]
}

# Prompt for the LLM
prompt = f"""
You are an AI recommendation engine for an e-commerce platform. Given the following user data:
- User ID: {user_data['user_id']}
- Recent Purchases: {', '.join(user_data['recent_purchases'])}
- Browsing History: {', '.join(user_data['browsing_history'])}
- Preferences: {', '.join(user_data['preferences'])}
Suggest 3 products with their names, categories, and reasons why they suit the user. Format the output as a JSON object.
"""

# Function to call the LLM API (replace with actual xAI API endpoint)
def get_recommendations(prompt):
    try:
        response = requests.post(
            "https://api.x.ai/grok3",  # Hypothetical endpoint
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={"prompt": prompt}
        )
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        return {"error": f"API request failed: {str(e)}"}

# Execute and print recommendations
result = get_recommendations(prompt)
print(json.dumps(result, indent=2))
Explanation:
The script constructs a prompt using user data.
It sends the prompt to an LLM API and retrieves JSON-formatted recommendations.
Exception handling ensures the script doesn’t crash on API failures.
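Because LLMs sometimes wrap JSON in extra prose, it is worth parsing the reply defensively before using it. A minimal sketch follows; the `recommendations` key matches the prompt above, and everything else is a defensive assumption:

```python
import json

def parse_recommendations(raw_text):
    """Extract the first JSON object from an LLM reply; return None if invalid."""
    start = raw_text.find("{")
    end = raw_text.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        data = json.loads(raw_text[start:end + 1])
    except json.JSONDecodeError:
        return None
    # Require the expected top-level key before trusting the result
    return data if "recommendations" in data else None

reply = 'Here you go:\n{"recommendations": [{"product_name": "Gaming Keyboard"}]}'
parsed = parse_recommendations(reply)
print(parsed["recommendations"][0]["product_name"])  # Gaming Keyboard
```

If parsing fails, a reasonable fallback is to re-prompt the model with an explicit reminder to return only JSON.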
7.1.4 Real-Life Example
Consider an online retailer like Amazon. Their recommendation engine might suggest a phone case to a user who recently purchased a smartphone. By analyzing browsing history (e.g., views of screen protectors) and preferences (e.g., tech accessories), the engine delivers targeted suggestions. This case study mirrors Amazon’s “Customers who bought this also bought” feature, powered by AI-driven insights.
7.2 Case Study: Legal Document Analysis
7.2.1 Understanding the Problem
Legal document analysis involves extracting key information (e.g., clauses, obligations) from contracts or legal texts. AI can automate this process, saving time for legal professionals. The challenge is to design prompts that accurately identify and summarize critical elements in complex documents.
Key Requirements:
Extract specific clauses (e.g., termination, liability).
Summarize key points in plain language.
Handle ambiguous or lengthy documents.
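Lengthy documents often exceed a model's context window, so a common approach is to split them into overlapping chunks and analyze each chunk separately. A minimal chunking sketch, with illustrative size and overlap values:

```python
def chunk_document(text, max_chars=2000, overlap=200):
    """Split a long contract into overlapping chunks that fit the model's context."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap so a clause split at a boundary is not lost
    return chunks

chunks = chunk_document("x" * 4500, max_chars=2000, overlap=200)
print(len(chunks))  # 3
```

Each chunk can then be sent through the same extraction prompt, with the per-chunk results merged afterwards.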
7.2.2 Prompt Design
A well-crafted prompt ensures the LLM understands the document structure and delivers concise outputs.
Prompt Example:
You are an AI legal document analyzer. Given the following contract excerpt:
[Insert contract text here]
Extract the termination clause and summarize it in plain language. Provide the output in JSON format with fields for the clause text and summary.
Expected Output:
{
  "termination_clause": "Either party may terminate this agreement with 30 days' written notice.",
  "summary": "The contract can be ended by either party giving 30 days' notice in writing."
}
7.2.3 Implementation with Code
Here’s a Python script to analyze a contract using an LLM.
import requests
import json
# Mock contract excerpt
contract_excerpt = """
This agreement is effective from January 1, 2025. Either party may terminate this agreement with 30 days' written notice. All disputes shall be resolved through arbitration.
"""
# Prompt for the LLM
prompt = f"""
You are an AI legal document analyzer. Given the following contract excerpt:
{contract_excerpt}
Extract the termination clause and summarize it in plain language. Provide the output in JSON format with fields for the clause text and summary.
"""
# Function to call the LLM API
def analyze_contract(prompt):
    try:
        response = requests.post(
            "https://api.x.ai/grok3",  # Hypothetical endpoint
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={"prompt": prompt}
        )
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        return {"error": f"API request failed: {str(e)}"}

# Execute and print analysis
result = analyze_contract(prompt)
print(json.dumps(result, indent=2))
Explanation:
The script sends a contract excerpt to the LLM with a prompt to extract and summarize the termination clause.
The output is formatted as JSON for easy integration into legal workflows.
7.2.4 Real-Life Example
In a law firm, an AI tool like this could process hundreds of contracts daily, extracting key clauses for review. For instance, a corporate lawyer might use it to summarize termination clauses across multiple vendor agreements, reducing manual effort and ensuring consistency.
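Processing many contracts is then just a loop over a single-document analyzer, with per-file error capture so one bad contract does not halt the batch. A sketch using a stub analyzer (swap in the real API-backed function):

```python
def batch_analyze(contracts, analyze):
    """Run a single-document analyzer over many contracts, capturing per-file errors."""
    results = {}
    for name, text in contracts.items():
        try:
            results[name] = analyze(text)
        except Exception as e:
            # Record the failure and keep going with the remaining documents
            results[name] = {"error": str(e)}
    return results

# Stub analyzer for demonstration; replace with the API-backed analyze_contract.
stub = lambda text: {"summary": text[:20]}
out = batch_analyze({"vendor_a.txt": "Either party may terminate..."}, stub)
print(out["vendor_a.txt"]["summary"])  # Either party may ter
```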
7.3 Case Study: Creative Writing Assistant
7.3.1 Understanding the Problem
Creative writing assistants help authors generate ideas, refine drafts, or create content in specific styles. The challenge is to craft prompts that produce coherent, creative, and contextually appropriate text.
Key Requirements:
Generate story ideas or continuations.
Match a specific tone or style (e.g., suspenseful, humorous).
Avoid generic or repetitive outputs.
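Repetitive output can be caught cheaply before showing it to the writer, for example by measuring how often short word sequences recur. A rough heuristic sketch, where the n-gram size and threshold are arbitrary choices:

```python
def looks_repetitive(text, ngram=4, threshold=0.3):
    """Cheap repetition check: flag outputs where many 4-word phrases recur."""
    words = text.lower().split()
    grams = [tuple(words[i:i + ngram]) for i in range(len(words) - ngram + 1)]
    if not grams:
        return False
    repeat_ratio = 1 - len(set(grams)) / len(grams)
    return repeat_ratio > threshold

print(looks_repetitive("the dark night " * 5))  # True
print(looks_repetitive("a detective finds a mysterious letter in an old mansion"))  # False
```

When a draft is flagged, the application can simply regenerate, optionally with a higher sampling temperature if the API exposes one.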
7.3.2 Prompt Design
Prompts for creative writing must provide clear context and constraints to guide the LLM.
Prompt Example:
You are a creative writing assistant. Write a 100-word story continuation for the following prompt:
"A detective finds a mysterious letter in an old mansion."
Use a suspenseful tone and focus on vivid imagery. Format the output as plain text.
Expected Output:
The detective’s flashlight flickered, casting jagged shadows across the mansion’s dusty walls. The letter, yellowed and brittle, spoke of a hidden room behind the fireplace. His heart pounded as he traced the cold stone, fingers brushing a concealed latch. With a creak, the wall parted, revealing a narrow staircase descending into darkness. Each step echoed, the air growing colder, thicker. At the bottom, a locked chest pulsed with an eerie glow. He reached for it, but the floor trembled, and whispers filled the void—voices not of this world, calling his name.
7.3.3 Implementation with Code
Here’s a Python script to generate a story continuation.
import requests
# Story prompt
story_prompt = "A detective finds a mysterious letter in an old mansion."
# Prompt for the LLM
prompt = f"""
You are a creative writing assistant. Write a 100-word story continuation for the following prompt:
"{story_prompt}"
Use a suspenseful tone and focus on vivid imagery. Format the output as plain text.
"""
# Function to call the LLM API
def generate_story(prompt):
    try:
        response = requests.post(
            "https://api.x.ai/grok3",  # Hypothetical endpoint
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={"prompt": prompt}
        )
        response.raise_for_status()
        return response.text  # Raw text body, since the story output is plain text
    except requests.exceptions.RequestException as e:
        return f"Error: API request failed: {str(e)}"

# Execute and print story
result = generate_story(prompt)
print(result)
Explanation:
The script sends a creative writing prompt to the LLM, specifying tone and length.
The output is plain text, suitable for creative writing applications.
7.3.4 Real-Life Example
A novelist might use this assistant to brainstorm plot twists or draft scenes. For example, a writer working on a thriller could input a story starter and receive a suspenseful continuation, saving time and sparking inspiration.
7.4 Hands-On Project: Building a Personal AI Assistant
7.4.1 Project Overview
This project involves building a personal AI assistant that handles tasks like scheduling, answering questions, and sending reminders. The assistant will use prompt engineering to interact with users via a web interface.
Key Features:
Task scheduling and reminders.
General knowledge question answering.
User-friendly web interface.
7.4.2 Step-by-Step Implementation
Set Up the Web Interface: Use HTML, JavaScript, and React for the frontend.
Design Prompts: Create prompts for task management and question answering.
Integrate LLM API: Connect to an LLM (e.g., Grok 3) for processing user inputs.
Test and Deploy: Ensure the assistant handles various inputs and edge cases.
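Step 2 above implies some way of deciding which prompt template fits a given input. A crude keyword heuristic is enough for a sketch; the verb list and category names are illustrative, and in practice the LLM itself can do this classification:

```python
def classify_input(user_input):
    """Rough heuristic for picking a prompt template; a sketch, not production logic."""
    text = user_input.lower().strip()
    task_verbs = ("schedule", "remind", "add", "create", "set")
    if text.startswith(task_verbs):
        return "task"
    if text.endswith("?") or text.startswith(("what", "who", "when", "where", "why", "how")):
        return "question"
    return "chat"

print(classify_input("Schedule a meeting at 3 PM"))  # task
print(classify_input("What is the capital of France?"))  # question
```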
7.4.3 Code Implementation
Below is a React-based web app for the personal AI assistant.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Personal AI Assistant</title>
<script src="https://cdn.jsdelivr.net/npm/react@18.2.0/umd/react.development.js"></script>
<script src="https://cdn.jsdelivr.net/npm/react-dom@18.2.0/umd/react-dom.development.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@babel/standalone@7.18.6/babel.min.js"></script>
<script src="https://cdn.tailwindcss.com"></script>
</head>
<body>
<div id="root"></div>
<script type="text/babel">
const { useState } = React;
function Assistant() {
const [input, setInput] = useState("");
const [response, setResponse] = useState("");
const handleSubmit = async () => {
const prompt = `
You are a personal AI assistant. Respond to the following user input:
"${input}"
If the input is a task (e.g., "Schedule a meeting"), create a task in JSON format.
If the input is a question, provide a concise answer.
`;
try {
const res = await fetch("https://api.x.ai/grok3", {
method: "POST",
headers: { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" },
body: JSON.stringify({ prompt })
});
const data = await res.json();
setResponse(JSON.stringify(data, null, 2));
} catch (error) {
setResponse(`Error: ${error.message}`);
}
};
return (
<div className="max-w-2xl mx-auto p-4">
<h1 className="text-2xl font-bold mb-4">Personal AI Assistant</h1>
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
className="w-full p-2 border rounded mb-2"
placeholder="Ask a question or schedule a task..."
/>
<button
onClick={handleSubmit}
className="bg-blue-500 text-white p-2 rounded"
>
Submit
</button>
<pre className="mt-4 p-4 bg-gray-100 rounded">{response}</pre>
</div>
);
}
ReactDOM.createRoot(document.getElementById("root")).render(<Assistant />);
</script>
</body>
</html>
Explanation:
The React app provides a simple interface for user inputs.
The prompt distinguishes between tasks and questions, returning JSON for tasks and text for answers.
Tailwind CSS ensures a clean, responsive design.
7.4.4 Testing and Deployment
Testing: Input various queries (e.g., “Schedule a meeting at 3 PM” or “What is the capital of France?”) and verify outputs.
Deployment: Host the app on a platform like Vercel or Netlify for public access.
Edge Cases: Handle empty inputs or ambiguous queries with fallback responses.
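The edge-case handling mentioned above can live in a small gate that decides whether an input should reach the model at all. A sketch in Python for the server side (the length limit and the messages are arbitrary):

```python
def fallback_response(user_input):
    """Return a canned reply for inputs that should never reach the LLM;
    None means the input is safe to forward to the model."""
    text = (user_input or "").strip()
    if not text:
        return "Please type a question or a task."
    if len(text) > 500:
        return "That request is too long - please shorten it."
    return None

print(fallback_response(""))  # Please type a question or a task.
```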
7.5 Real-Life Integration: Healthcare and Finance Sectors
7.5.1 Healthcare Applications
AI prompt engineering in healthcare can assist with patient data analysis, diagnosis support, and treatment planning. For example, an AI can summarize patient records or suggest possible diagnoses based on symptoms.
Prompt Example:
You are a healthcare AI assistant. Given the following patient data:
- Age: 45
- Symptoms: [Fever, Cough, Fatigue]
- Medical History: [Asthma, Hypertension]
Suggest possible diagnoses and recommended tests in JSON format.
Expected Output:
{
  "diagnoses": [
    {
      "condition": "Influenza",
      "probability": "High",
      "tests": ["Flu swab", "Blood test"]
    },
    {
      "condition": "Pneumonia",
      "probability": "Medium",
      "tests": ["Chest X-ray", "Sputum culture"]
    }
  ]
}
7.5.2 Finance Applications
In finance, AI can analyze market trends, generate financial reports, or assess investment risks.
Prompt Example:
You are a financial AI assistant. Given the following portfolio:
- Stocks: [AAPL: $5000, TSLA: $3000]
- Bonds: [$2000]
Analyze the portfolio risk and suggest rebalancing strategies in JSON format.
Expected Output:
{
  "risk_level": "Moderate",
  "rebalancing_strategies": [
    {
      "action": "Reduce TSLA exposure",
      "reason": "High volatility in tech stocks"
    },
    {
      "action": "Increase bond allocation",
      "reason": "Stabilize portfolio with safer assets"
    }
  ]
}
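The risk assessment above can start from plain allocation arithmetic computed locally, which can then be fed into the prompt as context. A minimal sketch:

```python
def allocation_weights(portfolio):
    """Compute each holding's share of the total portfolio value."""
    total = sum(portfolio.values())
    return {name: round(value / total, 3) for name, value in portfolio.items()}

weights = allocation_weights({"AAPL": 5000, "TSLA": 3000, "Bonds": 2000})
print(weights)  # {'AAPL': 0.5, 'TSLA': 0.3, 'Bonds': 0.2}
```

Including these computed weights in the prompt grounds the LLM's qualitative risk analysis in exact numbers rather than asking it to do the arithmetic itself.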
7.5.3 Code Examples
Here’s a Python script for the healthcare use case.
import requests
import json

# Patient data
patient_data = {
    "age": 45,
    "symptoms": ["Fever", "Cough", "Fatigue"],
    "medical_history": ["Asthma", "Hypertension"]
}

# Prompt for the LLM
prompt = f"""
You are a healthcare AI assistant. Given the following patient data:
- Age: {patient_data['age']}
- Symptoms: {', '.join(patient_data['symptoms'])}
- Medical History: {', '.join(patient_data['medical_history'])}
Suggest possible diagnoses and recommended tests in JSON format.
"""

# Function to call the LLM API
def analyze_healthcare(prompt):
    try:
        response = requests.post(
            "https://api.x.ai/grok3",  # Hypothetical endpoint
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={"prompt": prompt}
        )
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        return {"error": f"API request failed: {str(e)}"}

# Execute and print analysis
result = analyze_healthcare(prompt)
print(json.dumps(result, indent=2))
Explanation:
The script processes patient data and generates diagnostic suggestions.
JSON output ensures compatibility with healthcare systems.
7.6 Code Snippet: Full Project Implementation
Below is a full implementation of the personal AI assistant project, expanding on Section 7.4.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Personal AI Assistant</title>
<script src="https://cdn.jsdelivr.net/npm/react@18.2.0/umd/react.development.js"></script>
<script src="https://cdn.jsdelivr.net/npm/react-dom@18.2.0/umd/react-dom.development.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@babel/standalone@7.18.6/babel.min.js"></script>
<script src="https://cdn.tailwindcss.com"></script>
</head>
<body>
<div id="root"></div>
<script type="text/babel">
const { useState, useEffect } = React;
function Assistant() {
const [input, setInput] = useState("");
const [response, setResponse] = useState("");
const [history, setHistory] = useState([]);
const handleSubmit = async () => {
if (!input.trim()) {
setResponse("Error: Please enter a valid input.");
return;
}
const prompt = `
You are a personal AI assistant. Respond to the following user input:
"${input}"
If the input is a task (e.g., "Schedule a meeting"), create a task in JSON format.
If the input is a question, provide a concise answer.
For other inputs, respond conversationally.
`;
try {
const res = await fetch("https://api.x.ai/grok3", {
method: "POST",
headers: { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" },
body: JSON.stringify({ prompt })
});
const data = await res.json();
setResponse(JSON.stringify(data, null, 2));
setHistory((prev) => [...prev, { input, response: data }]);
} catch (error) {
setResponse(`Error: ${error.message}`);
}
};
return (
<div className="max-w-2xl mx-auto p-4">
<h1 className="text-2xl font-bold mb-4">Personal AI Assistant</h1>
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
className="w-full p-2 border rounded mb-2"
placeholder="Ask a question or schedule a task..."
/>
<button
onClick={handleSubmit}
className="bg-blue-500 text-white p-2 rounded"
>
Submit
</button>
<pre className="mt-4 p-4 bg-gray-100 rounded">{response}</pre>
<h2 className="text-xl font-semibold mt-4">History</h2>
<ul className="mt-2">
{history.map((item, index) => (
<li key={index} className="p-2 border-b">
<strong>Input:</strong> {item.input}<br />
<strong>Response:</strong> {JSON.stringify(item.response, null, 2)}
</li>
))}
</ul>
</div>
);
}
ReactDOM.createRoot(document.getElementById("root")).render(<Assistant />);
</script>
</body>
</html>
Explanation:
This enhanced version includes a history feature to track user interactions.
It handles empty inputs with a user-friendly error message.
The interface remains responsive and styled with Tailwind CSS.
7.7 Best Practices in Case Studies
Clarity in Prompts: Use specific, context-rich prompts to avoid ambiguous outputs.
Structured Outputs: Request JSON or formatted text for easy integration.
Iterative Testing: Test prompts with varied inputs to ensure robustness.
User-Centric Design: Tailor prompts to the end user’s needs (e.g., plain language for legal summaries).
Scalability: Design prompts to handle large datasets or complex queries.
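Iterative testing is easier with a tiny harness that runs a prompt pipeline over known cases and reports which expectations fail. A sketch, where the keyword-matching check is a deliberately simple stand-in for real evaluation:

```python
def run_prompt_suite(generate, cases):
    """Run a prompt pipeline over test cases; return the inputs whose replies
    miss the expected keyword. 'generate' is any callable str -> str."""
    failures = []
    for user_input, must_contain in cases:
        reply = generate(user_input)
        if must_contain.lower() not in reply.lower():
            failures.append(user_input)
    return failures

# Stub model for demonstration; replace with a real API call.
stub = lambda q: "Paris is the capital of France."
print(run_prompt_suite(stub, [("What is the capital of France?", "Paris"),
                              ("Name a German city", "Berlin")]))  # ['Name a German city']
```

Running such a suite after every prompt tweak catches regressions the way unit tests do for ordinary code.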
7.8 Exception Handling in Projects
Exception handling is critical for robust AI applications. Common issues include:
API Failures: Handle network errors or rate limits with retries or fallbacks.
Invalid Inputs: Validate user inputs before sending to the LLM.
Ambiguous Outputs: Use fallback prompts if the LLM returns unclear results.
Example:
import time
import requests

def safe_api_call(prompt):
    max_retries = 3
    for attempt in range(max_retries):
        try:
            response = requests.post(
                "https://api.x.ai/grok3",  # Hypothetical endpoint
                headers={"Authorization": "Bearer YOUR_API_KEY"},
                json={"prompt": prompt}
            )
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            if attempt == max_retries - 1:
                return {"error": f"Failed after {max_retries} attempts: {str(e)}"}
            time.sleep(2 ** attempt)  # Exponential backoff
Explanation:
The script retries API calls with exponential backoff.
It returns an error message after exhausting retries.
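Input validation, the second item in the list above, can be a small guard that rejects bad inputs before spending an API call. A sketch with an arbitrary length limit:

```python
def validate_prompt_input(user_input, max_len=1000):
    """Basic validation before spending an API call; limits are arbitrary."""
    if not isinstance(user_input, str) or not user_input.strip():
        raise ValueError("Input must be a non-empty string")
    if len(user_input) > max_len:
        raise ValueError(f"Input exceeds {max_len} characters")
    return user_input.strip()

print(validate_prompt_input("  Summarize the termination clause  "))  # Summarize the termination clause
```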
7.9 Pros, Cons, and Alternatives
Pros:
Efficiency: AI prompt engineering automates complex tasks like recommendations and analysis.
Scalability: Easily adapts to various domains (e-commerce, legal, healthcare).
User-Friendly: Structured prompts produce consistent, usable outputs.
Cons:
Dependency on LLMs: Requires reliable API access and costs.
Prompt Sensitivity: Small changes in prompts can lead to varied outputs.
Data Privacy: Sensitive data (e.g., legal or health records) requires secure handling.
Alternatives:
Rule-Based Systems: Traditional algorithms for recommendations or analysis, though less flexible.
Pre-Trained Models: Fine-tuned models for specific tasks, reducing prompt engineering needs.
No-Code Platforms: Tools like Zapier or Bubble for simpler integrations.
Conclusion
Module 7 of AI Prompt Engineering provides a comprehensive guide to applying AI in real-world scenarios. Through case studies in e-commerce, legal analysis, and creative writing, and a hands-on project for a personal AI assistant, you’ve learned to design effective prompts, implement robust code, and handle exceptions. By integrating AI into healthcare and finance and following best practices, you’re equipped to build scalable, user-friendly solutions. Continue experimenting with prompts and exploring new applications to master AI prompt engineering.
Md. Mominul Islam