Claude No-Code
Leverage Claude CLI and Agent SDK for research, writing, document processing, and automation without writing code. Terminal commands and copy-paste scripts only.
Overview
Claude CLI and the Agent SDK enable rigorous research, analysis, and documentation workflows without programming. Everything in this guide runs from terminal commands or copy-paste scripts.
Core Capabilities
Research Automation
- Academic literature review and synthesis
- Source verification and citation management
- Cross-reference validation
- Evidence gathering from primary sources
Document Quality
- IEEE citation verification
- Dual-reader design testing (scannable + detailed paths)
- Logical argument structure analysis
- Counter-evidence identification
Content Analysis
- Extract structured data from unstructured sources
- Thematic categorization
- Comparative analysis across documents
- Pattern recognition in large corpora
Workflow Automation
- Batch document processing with quality standards
- Scheduled research digests
- Automated compliance checking
- Reproducible analytical pipelines
Two Execution Modes
1. Interactive CLI (Immediate Tasks)
claude "Verify all IEEE citations in thesis.md match source URLs. Flag discrepancies"
Single-command execution for ad-hoc tasks.
2. Agent Scripts (Complex Workflows)
Copy-paste Python/TypeScript scripts with customization points marked <-- CUSTOMIZE. No code comprehension required.
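The scripts later in this guide all follow the same small pattern; here is a minimal sketch (the prompt text, file names, and tool list are placeholders, not part of any specific script):

import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions

async def main():
    # Declare which tools the agent may use and how file edits are approved
    options = ClaudeAgentOptions(
        allowed_tools=["Read", "Write"],
        permission_mode="acceptEdits"
    )
    # Plain-English instructions, exactly like the CLI examples above
    async for msg in query(prompt="Summarize notes.md into summary.md", options=options):
        pass  # the agent does the work; nothing to handle here

asyncio.run(main())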
Prerequisites
- Claude Code installed (brew install --cask claude-code)
- Basic terminal navigation (cd, ls, run commands)
- For scripts: Python 3.8+ or Node.js 18+
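To confirm everything is in place before starting, a quick check from the terminal (assuming the standard version flags):

claude --version     # Claude Code CLI
python3 --version    # only needed for the Python scripts
node --version       # only needed for the TypeScript scripts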
Quality Standards
This guide emphasizes:
- Reproducibility: scripts document methodology
- Verification: built-in quality checks
- Standards compliance: IEEE citations, dual-reader design
- Transparency: no black-box operations
Structure
- Research Workflows: academic research, literature review, source management
- Writing & Content Creation: thesis development, analytical framework, citation management
- Document Processing: batch operations, format conversion, quality assurance
- Data Analysis: statistical summaries, trend analysis, reporting
- Web Research & Monitoring: systematic source gathering, monitoring, verification
- Automation Scripts: production-ready agents for recurring tasks
Research Workflows
Source Quality Verification
Check citation quality against hierarchy:
claude "Read chapter-draft.md. Categorize each source by tier:
• Tier 1 (Primary): archives, government docs, original data
• Tier 2 (Academic): peer-reviewed journals, university press
• Tier 3 (Secondary): well-documented analytical work
• Tier 4 (Use sparingly): journalism, opinion
Flag Tier 4 sources used for factual claims. Suggest Tier 1-2 alternatives"
Verify IEEE citation format:
claude "Audit thesis.md for IEEE citation compliance:
1. Check inline format: [n], [12][13], [8]-[11]
2. Verify References section has: Author, 'Title,' Publication, Date, URL
3. Confirm citation numbers match first appearance order
4. Flag any broken or inaccessible URLs
Generate compliance report"
Literature Review with Standards
Systematic source gathering:
claude "Research 'carbon accounting methodology 1990-2010':
1. Search Google Scholar, JSTOR, archive.org
2. Prioritize: peer-reviewed articles, government reports, primary data
3. For each source: extract title, authors, year, venue, DOI/URL
4. Create annotated bibliography with:
• Source tier (1-4)
• Key findings (2-3 sentences)
• Relevance to carbon accounting methodology
• Citation quality assessment
Save as literature-review.md with IEEE format"
Cross-reference verification:
claude "Read sources.md (contains 15 papers on monetary policy).
For claim: 'Central banks increased rates 12 times between 2000-2005'
1. Identify which sources support this claim
2. Check if numbers match across sources
3. Flag discrepancies
4. Assess: Do we have 2+ independent sources? (minimum standard)
Generate verification report"
Dual-Reader Design Testing
Test scannable path:
claude "Audit chapter03.md for dual-reader compliance:
CURSORY PATH (15-30 min):
• Can reader get core argument from: Executive Frame → headers → Synthesis?
• Test: Read only those sections. Does thesis flow make sense?
DETAILED PATH (2-4 hours):
• Full evidence and citations
• Follow 'Analytic note:' sections
Report: Do both paths work independently? Flag sections that break either path"
Both reading paths must be complete and coherent on their own. Cursory readers should get the thesis without reading evidence. Detailed readers should see explicit analytical connections.
Thesis Connection Analysis
Generate 'Analytic note:' sections:
claude "Read chapter-draft.md.
For each major section of evidence:
1. Identify the factual claim or pattern documented
2. Generate an 'Analytic note:' paragraph explaining:
• How this evidence connects to core thesis
• What it reveals about the larger argument
• Why this matters for the overall narrative
Insert 'Analytic note:' sections after each evidence block.
Ensure explicit connection, no assumed leaps"
Counter-Evidence and Critique
Test argument strength:
claude "Read thesis-chapter.md. Play adversarial critic:
1. Identify central claims
2. Search for counter-evidence that challenges each claim
3. Find contradictory data, alternative interpretations
4. Assess: Are counter-arguments addressed? If not, flag as weakness
5. Suggest how to strengthen argument against critique
Generate adversarial review report"
Detect retrospective bias:
claude "Audit historical-analysis.md for teleological reasoning:
Flag any language suggesting:
• Inevitability ('had to happen', 'naturally led to')
• Predetermined outcomes ('destined', 'inevitable')
• Retrospective justification (explaining past by future)
For each flag: suggest neutral phrasing that preserves contingency"
Citation Management
Build bibliography from PDFs:
claude "Process all PDFs in ./sources/:
1. Extract: Author, Title, Publication, Year, DOI/URL
2. Categorize by source tier (1-4)
3. Format as IEEE citations
4. Sort by: tier (1-4), then chronologically
5. Generate bibliography.md with quality ratings
For any source missing DOI/URL: flag as inaccessible"
Cross-verify contested claims:
claude "Claim: 'Manufacturing employment fell 40% in 1980-2000'
Required: Minimum 2 independent Tier 1-2 sources
1. Search: government labor statistics, academic studies
2. Find: 2+ sources confirming this figure
3. Check: Do numbers match exactly? If variance, document range
4. Assess: Source independence (not citing each other)
Generate verification report with citations"
Writing & Content Creation
Drafting Content
Generate report from data:
claude "Read sales-data.csv. Write a 2-page executive summary highlighting trends, top performers, and recommendations. Save as report.md"
Create documentation from code:
claude "Read all .py files in src/. Generate user-facing documentation explaining what each module does. Use non-technical language. Save to docs/user-guide.md"
Editing & Refinement
Improve clarity and conciseness:
claude "Edit draft.md for clarity. Remove jargon, shorten sentences, and improve flow. Preserve all technical details. Save to draft-v2.md"
Check consistency:
claude "Read style-guide.md and article.md. Ensure the article follows the style guide's rules for tone, terminology, and formatting. List violations and suggest fixes"
Content Transformation
Convert formats:
claude "Convert meeting-notes.txt to a structured markdown document with sections: Attendees, Discussion Points, Action Items, Next Steps"
Expand bullet points to prose:
claude "Read outline.md. Expand each bullet point into a full paragraph. Maintain logical flow between sections. Save as expanded-draft.md"
Email and Communication
Draft professional emails:
claude "Write a professional email to stakeholders summarizing project-status.md. Tone: optimistic but realistic. Length: 3-4 paragraphs. Include next steps"
Generate responses to common queries:
claude "Read our FAQ.md. Create template responses for the top 5 questions. Use a friendly, helpful tone. Save as email-templates.md"
Process multiple files at once:
claude "Edit all .md files in ./drafts/ for grammar and style. Save edited versions with '-edited' suffix"
Content Generation from Templates
Fill in templates with data:
claude "Using template.md as the structure and data.json for values, generate 10 personalized letters. Save as letter-001.md through letter-010.md"
Create variations:
claude "Read blog-post.md. Create 3 variations optimized for: LinkedIn (professional), Twitter thread (conversational), and newsletter (detailed). Save separately"
Quality Assurance
Fact-check claims:
claude "Read article.md. Identify all factual claims. Search the web to verify each claim. Mark verified, unverified, or contradicted claims"
Check tone and style:
claude "Analyze tone of customer-email.md. Is it professional, empathetic, and clear? Suggest improvements if needed"
Multi-Document Writing Projects
Maintain consistency across documents:
claude "Read all chapters in ./book-chapters/. Check for: consistent character descriptions, timeline continuity, and terminology. List inconsistencies"
Generate table of contents:
claude "Scan all .md files in this directory. Generate a hierarchical table of contents with page numbers (estimate ~500 words per page). Save as toc.md"
Document Processing
Batch File Operations
Rename files based on content:
claude "Read each PDF in ./invoices/. Rename to format: YYYY-MM-DD_VendorName_Amount.pdf based on the invoice content"
Organize files into folders:
claude "Scan all documents in this directory. Create folders by category (contracts, invoices, reports, misc). Move files to appropriate folders based on content"
Data Extraction
Extract information from PDFs:
claude "Read all PDFs in ./receipts/. Extract: date, vendor, amount, category. Save as expenses.csv"
Parse structured documents:
claude "Read contract.pdf. Extract key terms: parties involved, effective date, termination date, payment terms, obligations. Save as contract-summary.md"
Format Conversion
Convert to markdown:
claude "Convert report.docx to clean markdown. Preserve headings, lists, tables, and emphasis. Save as report.md"
Create plain text from complex documents:
claude "Extract all text from presentation.pptx. Organize by slide number with slide titles. Save as presentation-transcript.txt"
Document Analysis
Compare versions:
claude "Compare contract-v1.pdf and contract-v2.pdf. List all changes, additions, and deletions. Highlight significant legal differences"
Identify missing information:
claude "Read application-form.pdf. List all required fields that are blank or incomplete"
Ensure you have permission to process documents containing sensitive information. Claude processes data securely, but be mindful of organizational policies.
Document Generation
Create reports from data:
claude "Read metrics.json. Generate a formatted PDF-ready markdown report with charts described in text, summary statistics, and trend analysis"
Merge multiple documents:
claude "Combine intro.md, methods.md, results.md, and conclusion.md into a single cohesive document. Add transitions between sections. Save as full-report.md"
Batch Processing Workflows
Process invoice batch:
claude "For each PDF in ./incoming-invoices/: 1) Extract vendor, date, amount, items; 2) Verify totals are correct; 3) Flag any discrepancies; 4) Move processed files to ./processed/; 5) Save summary to invoice-log.csv"
Document quality check:
claude "Scan all .docx files in ./submissions/. Check each for: required sections, proper formatting, reference completeness. Generate quality-report.md with pass/fail status for each document"
Archive Management
Create document index:
claude "Scan ./archive/ recursively. For each document, extract: filename, date created, document type, key subjects/entities mentioned. Save as archive-index.csv"
Summarize old records:
claude "Read all documents in ./2020-records/. Create a year-end summary covering: major events, key decisions, outstanding items. Save as 2020-summary.md"
Data Analysis (No Code Required)
Basic Data Exploration
Understand your dataset:
claude "Analyze sales.csv. Tell me: number of rows, column names and types, date range, any missing values. Show first 5 rows as a sample"
Get summary statistics:
claude "Read revenue-data.csv. Calculate: total revenue, average per month, highest/lowest months, year-over-year growth. Present as a formatted table"
Data Cleaning
Fix formatting issues:
claude "Read messy-data.csv. Fix issues: inconsistent date formats, remove duplicate rows, standardize column names (lowercase, no spaces). Save as clean-data.csv"
Handle missing data:
claude "Analyze customer-data.csv. For missing values: fill missing cities with 'Unknown', missing ages with median age, drop rows with missing email. Save as complete-data.csv"
Filtering and Subsetting
Extract specific records:
claude "From transactions.csv, extract all purchases over $1000 in Q4 2024. Save to high-value-q4.csv"
Create subsets:
claude "Split employees.csv into separate files by department. Name each file: dept-<DepartmentName>.csv"
Data Transformation
Aggregate data:
claude "Read daily-sales.csv. Group by month and calculate: total sales, average order value, number of transactions. Save as monthly-summary.csv"
Pivot and reshape:
claude "Transform survey-responses.csv from long format to wide format. Each respondent should be one row with question responses as columns. Save as survey-wide.csv"
Instead of specifying exact operations, describe what you want: 'I need to see sales trends by region over time' rather than 'pivot by X and sum Y'.
Generating Insights
Identify trends:
claude "Analyze website-traffic.csv over the past 6 months. Identify: traffic trends, peak days/times, sources with highest growth. Write a 1-page summary with key findings"
Find patterns:
claude "Examine customer-churn.csv. What characteristics are most common among customers who churned? Create a profile of at-risk customers"
Data Visualization (Text-Based)
Create ASCII charts:
claude "Read monthly-revenue.csv. Create a text-based bar chart showing revenue by month. Include a trend line description"
Generate chart specifications:
claude "Analyze sales-by-region.csv. Write detailed instructions for creating: 1) a stacked bar chart of sales by region and product category, 2) a line chart of monthly trends. Include axis labels, colors, and title suggestions"
Comparison and Benchmarking
Compare datasets:
claude "Compare 2023-sales.csv and 2024-sales.csv. Calculate year-over-year changes for each product category. Identify biggest winners and losers. Save as yoy-comparison.csv"
Benchmark against targets:
claude "Read actual-performance.csv and targets.csv. For each metric, calculate: actual value, target, variance (absolute and %), and status (on track / below / exceeded). Save as performance-report.csv"
Reporting
Generate executive summary:
claude "Analyze quarterly-metrics.csv. Create an executive summary covering: overall performance vs. last quarter, top 3 achievements, top 3 concerns, recommendations. Format as markdown"
Create recurring reports:
claude "Read this week's activity-log.csv. Generate a weekly report with: total hours, breakdown by project, completion rate, blockers encountered. Use the same format as last week's report.md"
Working with Multiple Data Files
Merge datasets:
claude "Combine customer-info.csv and purchase-history.csv using customer_id as the key. Include all customers even if they have no purchases. Save as customer-complete.csv"
Cross-reference data:
claude "Check inventory.csv against orders.csv. Identify products that are: low stock (<10 units) AND have >20 pending orders. Flag as urgent restock"
Web Research & Monitoring
Competitive Intelligence
Track competitor activity:
claude "Search the web for recent news about [CompanyName]. Focus on: product launches, funding, partnerships, leadership changes. Summarize findings with dates and sources"
Compare offerings:
claude "Research pricing for project management tools: Asana, Monday.com, Jira, ClickUp. Create a comparison table: price tiers, features, user limits, integrations. Include source URLs"
Market Research
Identify trends:
claude "Search for 'sustainability in fashion industry 2024'. Find 5-7 recent articles from credible sources. Extract key trends, statistics, and predictions. Summarize in a 2-page report"
Gather customer feedback:
claude "Search for reviews of [ProductName] on Reddit, Twitter, and review sites. Categorize feedback into: praise, complaints, feature requests. Tally frequency of each theme"
Academic and Technical Research
Find recent publications:
claude "Search Google Scholar for papers on 'federated learning privacy' published in 2024. List top 10 by citation count with: title, authors, venue, citation count, brief summary"
Gather technical documentation:
claude "Find official documentation for PostgreSQL full-text search. Extract: setup instructions, configuration options, example queries, performance tips. Save as postgres-fts-guide.md"
News Monitoring
Daily news digest:
claude "Search for today's news on 'artificial intelligence regulation'. Find 5 most relevant articles. For each: headline, source, 2-sentence summary, URL. Format as daily-brief.md"
Track specific topics over time:
claude "Search for news about 'renewable energy policy' from the past week. Organize chronologically. Identify: major developments, key players, regional differences. Save as weekly-energy-update.md"
Claude retrieves information from web searches. Always verify critical information from primary sources, especially for business or academic use.
Industry Analysis
Collect industry reports:
claude "Find recent analyst reports on the cybersecurity market. Search for: market size, growth projections, key vendors, emerging trends. Cite sources with URLs"
Identify key players:
claude "Research companies in the 'AI chip manufacturing' space. List: company names, headquarters, recent funding, key products, market position. Create a landscape overview"
Event Monitoring
Track conferences and events:
claude "Find upcoming AI/ML conferences in 2025. For each: name, dates, location, focus areas, submission deadlines. Sort chronologically. Save as conferences-2025.md"
Summarize event outcomes:
claude "Search for coverage of [ConferenceName] that happened last week. What were the major announcements, keynotes, and themes? Compile into a post-event summary"
Price and Product Monitoring
Track price changes:
claude "Search for current price of [ProductName] across major retailers: Amazon, BestBuy, Walmart, Target. Note sale prices and stock availability. Create price-comparison.md"
Product launch tracking:
claude "Research upcoming tech product launches in Q1 2025. Focus on: smartphones, laptops, AI products. For each: product name, company, expected release date, rumored specs"
Regulatory and Compliance Research
Monitor regulatory changes:
claude "Search for recent GDPR enforcement actions in 2024. Summarize: companies fined, violation type, penalty amount, key takeaways for compliance"
Track policy developments:
claude "Research current state of AI regulation in the EU, US, and China. Compare approaches: what's regulated, enforcement mechanisms, timeline. Create policy-comparison.md"
Social Listening
Brand mentions:
claude "Search Twitter, Reddit, and news sites for mentions of [BrandName] this month. Categorize sentiment: positive, negative, neutral. Highlight any viral posts or emerging issues"
Topic analysis:
claude "What are people saying about 'remote work productivity' on social media? Extract common themes, concerns, and tips. Summarize popular opinions"
Simple Automation Scripts
For recurring tasks, use these ready-made Python and TypeScript scripts. No coding knowledge required: just copy, paste, customize, and run.
Prerequisites
# Install Claude Agent SDK (one-time setup)
pip install claude-agent-sdk
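The TypeScript versions below import from @anthropic-ai/claude-agent-sdk; if you plan to run those, install the matching npm package as well (package name assumed from the import statements used in the scripts):

npm install @anthropic-ai/claude-agent-sdk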
Daily Research Digest
Automatically compile research on a topic every day.
Python:

import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions
from datetime import date

async def daily_research_digest():
    topic = "quantum computing"  # <-- CUSTOMIZE THIS
    output_file = f"digest-{date.today()}.md"

    options = ClaudeAgentOptions(
        allowed_tools=["WebSearch", "Write"],
        permission_mode="acceptEdits"
    )

    prompt = f"""
    Search the web for today's news and developments about '{topic}'.
    Find 5-7 relevant articles from the past 24 hours.
    Create a digest with:
    - Article headline and source
    - 2-sentence summary
    - Why it matters
    - URL
    Save to {output_file}
    """

    async for msg in query(prompt=prompt, options=options):
        pass  # Agent handles everything

    print(f"✅ Digest saved to {output_file}")

asyncio.run(daily_research_digest())

TypeScript:

import { query } from "@anthropic-ai/claude-agent-sdk";

async function dailyResearchDigest() {
  const topic = "quantum computing"; // <-- CUSTOMIZE THIS
  const outputFile = `digest-${new Date().toISOString().split('T')[0]}.md`;

  const prompt = `
    Search the web for today's news and developments about '${topic}'.
    Find 5-7 relevant articles from the past 24 hours.
    Create a digest with:
    - Article headline and source
    - 2-sentence summary
    - Why it matters
    - URL
    Save to ${outputFile}
  `;

  for await (const msg of query({
    prompt,
    options: {
      allowedTools: ["WebSearch", "Write"],
      permissionMode: "acceptEdits"
    }
  })) {
    // Agent handles everything
  }

  console.log(`✅ Digest saved to ${outputFile}`);
}

dailyResearchDigest();

Run it:
python daily_digest.py
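If you prefer the TypeScript version, it runs the same way once saved (the file name here is just an example):

npx tsx daily_digest.ts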
Automate it (run every morning at 9am):
# Add to crontab (macOS/Linux)
0 9 * * * cd /path/to/scripts && python daily_digest.py
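On Windows, Task Scheduler can do the same job from the command line; a sketch with a placeholder task name and script path:

schtasks /create /tn "DailyDigest" /tr "python C:\scripts\daily_digest.py" /sc daily /st 09:00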
Batch Document Summarizer
Summarize all PDFs in a folder automatically.
Python:

import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions

async def batch_summarize():
    folder = "./documents"  # <-- CUSTOMIZE THIS

    options = ClaudeAgentOptions(
        allowed_tools=["Read", "Write", "Glob"],
        permission_mode="acceptEdits"
    )

    prompt = f"""
    Find all PDF files in {folder}.
    For each PDF:
    1. Read the content
    2. Create a 3-paragraph summary
    3. Append to summaries.md with format:
       ## [Filename]
       [Summary]
       ---
    """

    async for msg in query(prompt=prompt, options=options):
        pass

    print("✅ All PDFs summarized in summaries.md")

asyncio.run(batch_summarize())

TypeScript:

import { query } from "@anthropic-ai/claude-agent-sdk";

async function batchSummarize() {
  const folder = "./documents"; // <-- CUSTOMIZE THIS

  const prompt = `
    Find all PDF files in ${folder}.
    For each PDF:
    1. Read the content
    2. Create a 3-paragraph summary
    3. Append to summaries.md with format:
       ## [Filename]
       [Summary]
       ---
  `;

  for await (const msg of query({
    prompt,
    options: {
      allowedTools: ["Read", "Write", "Glob"],
      permissionMode: "acceptEdits"
    }
  })) {
    // Agent handles everything
  }

  console.log("✅ All PDFs summarized in summaries.md");
}

batchSummarize();

Weekly Report Generator
Generate a weekly summary from daily logs.
Python:

import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions
from datetime import date, timedelta

async def weekly_report():
    # Automatically calculates last week's date range
    today = date.today()
    week_ago = today - timedelta(days=7)

    options = ClaudeAgentOptions(
        allowed_tools=["Read", "Write", "Glob"],
        permission_mode="acceptEdits"
    )

    prompt = f"""
    Read all files in ./daily-logs/ from the past 7 days ({week_ago} to {today}).
    Create a weekly report (week-{today}.md) with:
    1. Executive Summary (2-3 sentences)
    2. Key Achievements (bullet list)
    3. Challenges Encountered (bullet list)
    4. Metrics Summary (hours worked, tasks completed)
    5. Next Week's Priorities
    Use professional tone. Include specific numbers.
    """

    async for msg in query(prompt=prompt, options=options):
        pass

    print(f"✅ Weekly report generated: week-{today}.md")

asyncio.run(weekly_report())

TypeScript:

import { query } from "@anthropic-ai/claude-agent-sdk";

async function weeklyReport() {
  const today = new Date().toISOString().split('T')[0];

  const prompt = `
    Read all files in ./daily-logs/ from the past 7 days.
    Create a weekly report (week-${today}.md) with:
    1. Executive Summary (2-3 sentences)
    2. Key Achievements (bullet list)
    3. Challenges Encountered (bullet list)
    4. Metrics Summary (hours worked, tasks completed)
    5. Next Week's Priorities
    Use professional tone. Include specific numbers.
  `;

  for await (const msg of query({
    prompt,
    options: {
      allowedTools: ["Read", "Write", "Glob"],
      permissionMode: "acceptEdits"
    }
  })) {
    // Agent handles everything
  }

  console.log(`✅ Weekly report generated: week-${today}.md`);
}

weeklyReport();

Expense Report from Receipts
Process receipt images and generate expense report CSV.
Python:

import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions

async def process_receipts():
    receipts_folder = "./receipts"  # <-- CUSTOMIZE THIS

    options = ClaudeAgentOptions(
        allowed_tools=["Read", "Write"],
        permission_mode="acceptEdits"
    )

    prompt = f"""
    Scan all images in {receipts_folder}.
    For each receipt, extract:
    - Date
    - Vendor name
    - Total amount
    - Category (meals, travel, supplies, etc.)
    Create expenses.csv with columns: Date, Vendor, Amount, Category
    Sort by date (newest first).
    """

    async for msg in query(prompt=prompt, options=options):
        pass

    print("✅ Expense report saved to expenses.csv")

asyncio.run(process_receipts())

TypeScript:

import { query } from "@anthropic-ai/claude-agent-sdk";

async function processReceipts() {
  const receiptsFolder = "./receipts"; // <-- CUSTOMIZE THIS

  const prompt = `
    Scan all images in ${receiptsFolder}.
    For each receipt, extract:
    - Date
    - Vendor name
    - Total amount
    - Category (meals, travel, supplies, etc.)
    Create expenses.csv with columns: Date, Vendor, Amount, Category
    Sort by date (newest first).
  `;

  for await (const msg of query({
    prompt,
    options: {
      allowedTools: ["Read", "Write"],
      permissionMode: "acceptEdits"
    }
  })) {
    // Agent handles everything
  }

  console.log("✅ Expense report saved to expenses.csv");
}
processReceipts();

To adapt any of these scripts:
- Change the values marked with <-- CUSTOMIZE THIS
- Modify the prompt text to match your needs
- Run the script: python script.py or npx tsx script.ts
- Review the output and adjust as needed
Data Analysis Report
Analyze CSV and generate insights automatically.
Python:

import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions

async def analyze_data():
    data_file = "sales-data.csv"  # <-- CUSTOMIZE THIS

    options = ClaudeAgentOptions(
        allowed_tools=["Read", "Write"],
        permission_mode="acceptEdits"
    )

    prompt = f"""
    Analyze {data_file}.
    Create analysis-report.md with:
    1. Dataset Overview (rows, columns, date range)
    2. Summary Statistics (totals, averages, growth rates)
    3. Key Insights (top 3 findings)
    4. Trends (what's increasing/decreasing)
    5. Recommendations (3 actionable suggestions)
    Include specific numbers. Use markdown tables where helpful.
    """

    async for msg in query(prompt=prompt, options=options):
        pass

    print("✅ Analysis saved to analysis-report.md")

asyncio.run(analyze_data())

TypeScript:

import { query } from "@anthropic-ai/claude-agent-sdk";

async function analyzeData() {
  const dataFile = "sales-data.csv"; // <-- CUSTOMIZE THIS

  const prompt = `
    Analyze ${dataFile}.
    Create analysis-report.md with:
    1. Dataset Overview (rows, columns, date range)
    2. Summary Statistics (totals, averages, growth rates)
    3. Key Insights (top 3 findings)
    4. Trends (what's increasing/decreasing)
    5. Recommendations (3 actionable suggestions)
    Include specific numbers. Use markdown tables where helpful.
  `;

  for await (const msg of query({
    prompt,
    options: {
      allowedTools: ["Read", "Write"],
      permissionMode: "acceptEdits"
    }
  })) {
    // Agent handles everything
  }

  console.log("✅ Analysis saved to analysis-report.md");
}

analyzeData();

Next Steps
Make Scripts Yours:
- Copy any script above
- Change the CUSTOMIZE THIS values
- Modify prompts to match your workflow
- Save as a .py or .ts file
- Run whenever needed
Schedule Automation:
- macOS/Linux: Use cron for scheduled runs
- Windows: Use Task Scheduler
- Cloud: Deploy to AWS Lambda, Google Cloud Functions, or similar
You now have production-ready automation without writing a single line of code from scratch.